Oct 14 13:06:14.001074 master-1 systemd[1]: Starting Kubernetes Kubelet...
Oct 14 13:06:14.651592 master-1 kubenswrapper[4740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 14 13:06:14.651592 master-1 kubenswrapper[4740]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Oct 14 13:06:14.651592 master-1 kubenswrapper[4740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 14 13:06:14.651592 master-1 kubenswrapper[4740]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 14 13:06:14.651592 master-1 kubenswrapper[4740]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 14 13:06:14.653167 master-1 kubenswrapper[4740]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 14 13:06:14.653167 master-1 kubenswrapper[4740]: I1014 13:06:14.652498 4740 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 14 13:06:14.657717 master-1 kubenswrapper[4740]: W1014 13:06:14.657657 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 14 13:06:14.657717 master-1 kubenswrapper[4740]: W1014 13:06:14.657692 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 14 13:06:14.657717 master-1 kubenswrapper[4740]: W1014 13:06:14.657704 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 13:06:14.657717 master-1 kubenswrapper[4740]: W1014 13:06:14.657713 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 14 13:06:14.657717 master-1 kubenswrapper[4740]: W1014 13:06:14.657724 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 14 13:06:14.657717 master-1 kubenswrapper[4740]: W1014 13:06:14.657733 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657758 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657768 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657777 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657786 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657795 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657803 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657811 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657819 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657829 4740 feature_gate.go:330] unrecognized feature gate: Example
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657837 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657845 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657853 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657862 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657870 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657879 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657887 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657896 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657904 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657912 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 13:06:14.658160 master-1 kubenswrapper[4740]: W1014 13:06:14.657921 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657929 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657937 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657945 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657953 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657962 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657970 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657979 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657987 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.657995 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658004 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658012 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658021 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658032 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658045 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658056 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658066 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658076 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658084 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 14 13:06:14.659472 master-1 kubenswrapper[4740]: W1014 13:06:14.658092 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658101 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658113 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658124 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658133 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658143 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658152 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658161 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658170 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658178 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658188 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658197 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658205 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658214 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658223 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658260 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658269 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658277 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658285 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 14 13:06:14.660668 master-1 kubenswrapper[4740]: W1014 13:06:14.658294 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658303 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658311 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658319 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658332 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658345 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658357 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658371 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: W1014 13:06:14.658383 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658531 4740 flags.go:64] FLAG: --address="0.0.0.0"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658548 4740 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658563 4740 flags.go:64] FLAG: --anonymous-auth="true"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658575 4740 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658594 4740 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658620 4740 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658638 4740 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658653 4740 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658667 4740 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658679 4740 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658692 4740 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658702 4740 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658712 4740 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 14 13:06:14.661906 master-1 kubenswrapper[4740]: I1014 13:06:14.658722 4740 flags.go:64] FLAG: --cgroup-root=""
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658732 4740 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658745 4740 flags.go:64] FLAG: --client-ca-file=""
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658767 4740 flags.go:64] FLAG: --cloud-config=""
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658786 4740 flags.go:64] FLAG: --cloud-provider=""
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658799 4740 flags.go:64] FLAG: --cluster-dns="[]"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658818 4740 flags.go:64] FLAG: --cluster-domain=""
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658830 4740 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658844 4740 flags.go:64] FLAG: --config-dir=""
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658855 4740 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658867 4740 flags.go:64] FLAG: --container-log-max-files="5"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658881 4740 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658891 4740 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658901 4740 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658910 4740 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658920 4740 flags.go:64] FLAG: --contention-profiling="false"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658930 4740 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658939 4740 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658957 4740 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658967 4740 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658979 4740 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658989 4740 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.658999 4740 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.659023 4740 flags.go:64] FLAG: --enable-load-reader="false"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.659033 4740 flags.go:64] FLAG: --enable-server="true"
Oct 14 13:06:14.662945 master-1 kubenswrapper[4740]: I1014 13:06:14.659042 4740 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659054 4740 flags.go:64] FLAG: --event-burst="100"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659065 4740 flags.go:64] FLAG: --event-qps="50"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659075 4740 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659085 4740 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659095 4740 flags.go:64] FLAG: --eviction-hard=""
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659106 4740 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659116 4740 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659126 4740 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659136 4740 flags.go:64] FLAG: --eviction-soft=""
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659145 4740 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659155 4740 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659164 4740 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659173 4740 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659183 4740 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659192 4740 flags.go:64] FLAG: --fail-swap-on="true"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659202 4740 flags.go:64] FLAG: --feature-gates=""
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659213 4740 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659224 4740 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659271 4740 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659281 4740 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659291 4740 flags.go:64] FLAG: --healthz-port="10248"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659300 4740 flags.go:64] FLAG: --help="false"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659310 4740 flags.go:64] FLAG: --hostname-override=""
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659320 4740 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659333 4740 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 14 13:06:14.664096 master-1 kubenswrapper[4740]: I1014 13:06:14.659343 4740 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659353 4740 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659364 4740 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659378 4740 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659391 4740 flags.go:64] FLAG: --image-service-endpoint=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659403 4740 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659414 4740 flags.go:64] FLAG: --kube-api-burst="100"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659425 4740 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659435 4740 flags.go:64] FLAG: --kube-api-qps="50"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659445 4740 flags.go:64] FLAG: --kube-reserved=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659455 4740 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659464 4740 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659475 4740 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659484 4740 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659495 4740 flags.go:64] FLAG: --lock-file=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659504 4740 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659514 4740 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659524 4740 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659539 4740 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659549 4740 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659558 4740 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659568 4740 flags.go:64] FLAG: --logging-format="text"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659577 4740 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659587 4740 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659597 4740 flags.go:64] FLAG: --manifest-url=""
Oct 14 13:06:14.665718 master-1 kubenswrapper[4740]: I1014 13:06:14.659606 4740 flags.go:64] FLAG: --manifest-url-header=""
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659619 4740 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659629 4740 flags.go:64] FLAG: --max-open-files="1000000"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659640 4740 flags.go:64] FLAG: --max-pods="110"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659650 4740 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659660 4740 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659673 4740 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659683 4740 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659693 4740 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659702 4740 flags.go:64] FLAG: --node-ip="192.168.34.11"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659713 4740 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659737 4740 flags.go:64] FLAG: --node-status-max-images="50"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659747 4740 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659757 4740 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659777 4740 flags.go:64] FLAG: --pod-cidr=""
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659787 4740 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d66b9dbe1d071d7372c477a78835fb65b48ea82db00d23e9086af5cfcb194ad"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659801 4740 flags.go:64] FLAG: --pod-manifest-path=""
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659810 4740 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659820 4740 flags.go:64] FLAG: --pods-per-core="0"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659830 4740 flags.go:64] FLAG: --port="10250"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659840 4740 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659849 4740 flags.go:64] FLAG: --provider-id=""
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659859 4740 flags.go:64] FLAG: --qos-reserved=""
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659869 4740 flags.go:64] FLAG: --read-only-port="10255"
Oct 14 13:06:14.666965 master-1 kubenswrapper[4740]: I1014 13:06:14.659879 4740 flags.go:64] FLAG: --register-node="true"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659888 4740 flags.go:64] FLAG: --register-schedulable="true"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659898 4740 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659923 4740 flags.go:64] FLAG: --registry-burst="10"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659932 4740 flags.go:64] FLAG: --registry-qps="5"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659942 4740 flags.go:64] FLAG: --reserved-cpus=""
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659951 4740 flags.go:64] FLAG: --reserved-memory=""
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659963 4740 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659973 4740 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659982 4740 flags.go:64] FLAG: --rotate-certificates="false"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.659992 4740 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660001 4740 flags.go:64] FLAG: --runonce="false"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660011 4740 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660021 4740 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660031 4740 flags.go:64] FLAG: --seccomp-default="false"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660046 4740 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660055 4740 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660066 4740 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660075 4740 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660087 4740 flags.go:64] FLAG: --storage-driver-password="root"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660099 4740 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660111 4740 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660123 4740 flags.go:64] FLAG: --storage-driver-user="root"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660135 4740 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660147 4740 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 14 13:06:14.668083 master-1 kubenswrapper[4740]: I1014 13:06:14.660159 4740 flags.go:64] FLAG: --system-cgroups=""
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660173 4740 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660191 4740 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660201 4740 flags.go:64] FLAG: --tls-cert-file=""
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660210 4740 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660223 4740 flags.go:64] FLAG: --tls-min-version=""
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660262 4740 flags.go:64] FLAG: --tls-private-key-file=""
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660272 4740 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660282 4740 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660291 4740 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660301 4740 flags.go:64] FLAG: --v="2"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660313 4740 flags.go:64] FLAG: --version="false"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660325 4740 flags.go:64] FLAG: --vmodule=""
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660336 4740 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: I1014 13:06:14.660347 4740 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660578 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660592 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660602 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660611 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660619 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660628 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660637 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660650 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 14 13:06:14.669459 master-1 kubenswrapper[4740]: W1014 13:06:14.660659 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660667 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660676 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660685 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660694 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660703 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660712 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660721 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660729 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660737 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660746 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660756 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660764 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660772 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660783 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660792 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660800 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660809 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660817 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 14 13:06:14.670836 master-1 kubenswrapper[4740]: W1014 13:06:14.660825 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660835 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660846 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660855 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660863 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660872 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660880 4740 feature_gate.go:330] unrecognized feature gate: Example
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660891 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660904 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660914 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660923 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660932 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660972 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660984 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.660993 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.661003 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.661013 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.661022 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 14 13:06:14.671796 master-1 kubenswrapper[4740]: W1014 13:06:14.661031 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661041 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661050 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661059 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661068 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661077 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661086 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661094 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661106 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661114 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661122 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661131 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661139 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661174 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661184 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661193 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661202 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661210 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661218 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661256 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 14 13:06:14.672832 master-1 kubenswrapper[4740]: W1014 13:06:14.661265 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.661274 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.661282 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.661290 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.661299 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.661307 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.661319 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: I1014 13:06:14.662127 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: I1014 13:06:14.673205 4740 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: I1014 13:06:14.673273 4740 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.673429 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.673446 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.673457 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.673467 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.673477 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.673485 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 14 13:06:14.673967 master-1 kubenswrapper[4740]: W1014 13:06:14.673494 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673503 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673511 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673520 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673531 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673543 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673552 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673561 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673569 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673578 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673587 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673595 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673606 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673617 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673627 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673637 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673646 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673654 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673663 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 13:06:14.674741 master-1 kubenswrapper[4740]: W1014 13:06:14.673671 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673680 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673688 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673696 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673705 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673713 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673723 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673731 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673739 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673751 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673762 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673772 4740 feature_gate.go:330] unrecognized feature gate: Example
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673782 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673791 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673801 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673809 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673818 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673827 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673835 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673844 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 13:06:14.675884 master-1 kubenswrapper[4740]: W1014 13:06:14.673852 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673861 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673869 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673878 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673886 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673895 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673903 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673911 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673920 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673930 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673941 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673951 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673962 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673974 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673985 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.673996 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.674007 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.674018 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.674032 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.674041 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 14 13:06:14.677171 master-1 kubenswrapper[4740]: W1014 13:06:14.674049 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674058 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674067 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674076 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674085 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674095 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674104 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: I1014 13:06:14.674117 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674398 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674415 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674426 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674435 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674444 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674456 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674467 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 14 13:06:14.678123 master-1 kubenswrapper[4740]: W1014 13:06:14.674478 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674488 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674497 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674506 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674515 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674527 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674537 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674546 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674554 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674563 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674571 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674580 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674589 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674597 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674606 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674616 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674625 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674633 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674641 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674650 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 14 13:06:14.678932 master-1 kubenswrapper[4740]: W1014 13:06:14.674659 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674667 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674676 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674685 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674693 4740 feature_gate.go:330] unrecognized feature gate: Example
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674701 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674711 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674720 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674728 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674737 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674746 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674754 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674763 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674772 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674780 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674789 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674797 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674806 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674814 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674823 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 14 13:06:14.680108 master-1 kubenswrapper[4740]: W1014 13:06:14.674831 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674839 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674848 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674856 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674864 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674873 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674882 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674894 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674904 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674913 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674921 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674929 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674938 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674946 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674954 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674963 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674972 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674980 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674989 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 13:06:14.681158 master-1 kubenswrapper[4740]: W1014 13:06:14.674997 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 13:06:14.682356 master-1 kubenswrapper[4740]: W1014 13:06:14.675005 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 13:06:14.682356 master-1 
kubenswrapper[4740]: W1014 13:06:14.675013 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 13:06:14.682356 master-1 kubenswrapper[4740]: W1014 13:06:14.675023 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 13:06:14.682356 master-1 kubenswrapper[4740]: W1014 13:06:14.675032 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 13:06:14.682356 master-1 kubenswrapper[4740]: W1014 13:06:14.675040 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 13:06:14.682356 master-1 kubenswrapper[4740]: I1014 13:06:14.675052 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 13:06:14.682356 master-1 kubenswrapper[4740]: I1014 13:06:14.676179 4740 server.go:940] "Client rotation is on, will bootstrap in background" Oct 14 13:06:14.682356 master-1 kubenswrapper[4740]: I1014 13:06:14.681680 4740 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Oct 14 13:06:14.683526 master-1 kubenswrapper[4740]: I1014 13:06:14.683473 4740 server.go:997] "Starting client certificate rotation" Oct 14 13:06:14.683526 master-1 kubenswrapper[4740]: I1014 13:06:14.683516 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 14 13:06:14.683888 master-1 kubenswrapper[4740]: I1014 13:06:14.683836 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Oct 14 13:06:14.713623 master-1 
kubenswrapper[4740]: I1014 13:06:14.713530 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 14 13:06:14.717377 master-1 kubenswrapper[4740]: I1014 13:06:14.716725 4740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 14 13:06:14.742600 master-1 kubenswrapper[4740]: I1014 13:06:14.742526 4740 log.go:25] "Validated CRI v1 runtime API" Oct 14 13:06:14.750827 master-1 kubenswrapper[4740]: I1014 13:06:14.750746 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Oct 14 13:06:14.752298 master-1 kubenswrapper[4740]: I1014 13:06:14.752207 4740 log.go:25] "Validated CRI v1 image API" Oct 14 13:06:14.754838 master-1 kubenswrapper[4740]: I1014 13:06:14.754753 4740 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 14 13:06:14.759398 master-1 kubenswrapper[4740]: I1014 13:06:14.759335 4740 fs.go:135] Filesystem UUIDs: map[1e761e60-6f4a-4eca-af56-216c943c57f6:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Oct 14 13:06:14.759502 master-1 kubenswrapper[4740]: I1014 13:06:14.759385 4740 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Oct 14 13:06:14.784458 master-1 kubenswrapper[4740]: I1014 13:06:14.783848 4740 manager.go:217] Machine: {Timestamp:2025-10-14 13:06:14.782171238 +0000 UTC m=+0.592460607 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514157568 SwapCapacity:0 
MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:1cf8d95a8b1a48698f4574ddaaf3cece SystemUUID:1cf8d95a-8b1a-4869-8f45-74ddaaf3cece BootID:002428e8-0778-40ec-a1c5-a4ae210e0314 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257078784 Type:vfs Inodes:6166279 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8b:55:52 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:3e:8b:55:52 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:53:6b:43 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:6d:6a:c3 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:d6:6e:03:b8:da:65 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514157568 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} 
{Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 14 13:06:14.784458 master-1 kubenswrapper[4740]: I1014 13:06:14.784355 4740 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 14 13:06:14.784873 master-1 kubenswrapper[4740]: I1014 13:06:14.784629 4740 manager.go:233] Version: {KernelVersion:5.14.0-427.91.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202509241235-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 14 13:06:14.785288 master-1 kubenswrapper[4740]: I1014 13:06:14.785205 4740 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 14 13:06:14.785643 master-1 kubenswrapper[4740]: I1014 13:06:14.785575 4740 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 14 13:06:14.787402 master-1 kubenswrapper[4740]: I1014 13:06:14.785630 4740 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-1","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percent
age":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 14 13:06:14.788890 master-1 kubenswrapper[4740]: I1014 13:06:14.788781 4740 topology_manager.go:138] "Creating topology manager with none policy" Oct 14 13:06:14.788990 master-1 kubenswrapper[4740]: I1014 13:06:14.788932 4740 container_manager_linux.go:303] "Creating device plugin manager" Oct 14 13:06:14.789471 master-1 kubenswrapper[4740]: I1014 13:06:14.789419 4740 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 14 13:06:14.789471 master-1 kubenswrapper[4740]: I1014 13:06:14.789468 4740 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 14 13:06:14.790396 master-1 kubenswrapper[4740]: I1014 13:06:14.790353 4740 state_mem.go:36] "Initialized new in-memory state store" Oct 14 13:06:14.790561 master-1 kubenswrapper[4740]: I1014 13:06:14.790522 4740 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 14 13:06:14.794042 master-1 kubenswrapper[4740]: I1014 13:06:14.793999 4740 kubelet.go:418] "Attempting to sync node with API server" Oct 14 13:06:14.794042 master-1 kubenswrapper[4740]: I1014 13:06:14.794037 4740 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 14 13:06:14.794266 master-1 kubenswrapper[4740]: I1014 13:06:14.794185 4740 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 14 13:06:14.794266 master-1 kubenswrapper[4740]: I1014 13:06:14.794219 4740 kubelet.go:324] "Adding apiserver pod source" Oct 14 13:06:14.794397 master-1 
kubenswrapper[4740]: I1014 13:06:14.794276 4740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 14 13:06:14.800055 master-1 kubenswrapper[4740]: I1014 13:06:14.799977 4740 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.12-3.rhaos4.18.gitdc59c78.el9" apiVersion="v1" Oct 14 13:06:14.802517 master-1 kubenswrapper[4740]: I1014 13:06:14.802472 4740 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 14 13:06:14.802785 master-1 kubenswrapper[4740]: I1014 13:06:14.802746 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 14 13:06:14.802785 master-1 kubenswrapper[4740]: I1014 13:06:14.802784 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802799 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802812 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802848 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802862 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802875 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802897 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802913 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 14 13:06:14.802923 master-1 kubenswrapper[4740]: I1014 13:06:14.802926 4740 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 14 13:06:14.803398 master-1 kubenswrapper[4740]: I1014 13:06:14.802944 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 14 13:06:14.803613 master-1 kubenswrapper[4740]: I1014 13:06:14.803569 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 14 13:06:14.805681 master-1 kubenswrapper[4740]: I1014 13:06:14.805636 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 14 13:06:14.806439 master-1 kubenswrapper[4740]: I1014 13:06:14.806348 4740 server.go:1280] "Started kubelet" Oct 14 13:06:14.808065 master-1 kubenswrapper[4740]: I1014 13:06:14.807905 4740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 14 13:06:14.808006 master-1 systemd[1]: Started Kubernetes Kubelet. Oct 14 13:06:14.809203 master-1 kubenswrapper[4740]: I1014 13:06:14.808069 4740 server_v1.go:47] "podresources" method="list" useActivePods=true Oct 14 13:06:14.809203 master-1 kubenswrapper[4740]: W1014 13:06:14.807998 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 14 13:06:14.809203 master-1 kubenswrapper[4740]: W1014 13:06:14.808095 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-1" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 14 13:06:14.809203 master-1 kubenswrapper[4740]: E1014 13:06:14.808340 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-1\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 14 
13:06:14.809203 master-1 kubenswrapper[4740]: E1014 13:06:14.808347 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 14 13:06:14.809203 master-1 kubenswrapper[4740]: I1014 13:06:14.808797 4740 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 14 13:06:14.809203 master-1 kubenswrapper[4740]: I1014 13:06:14.808995 4740 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 14 13:06:14.810713 master-1 kubenswrapper[4740]: I1014 13:06:14.810649 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 14 13:06:14.810713 master-1 kubenswrapper[4740]: I1014 13:06:14.810695 4740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 14 13:06:14.810948 master-1 kubenswrapper[4740]: I1014 13:06:14.810885 4740 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 14 13:06:14.810948 master-1 kubenswrapper[4740]: I1014 13:06:14.810928 4740 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 14 13:06:14.811109 master-1 kubenswrapper[4740]: I1014 13:06:14.810994 4740 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Oct 14 13:06:14.811109 master-1 kubenswrapper[4740]: E1014 13:06:14.811035 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 14 13:06:14.811109 master-1 kubenswrapper[4740]: I1014 13:06:14.811091 4740 reconstruct.go:97] "Volume reconstruction finished" Oct 14 13:06:14.811360 master-1 kubenswrapper[4740]: I1014 13:06:14.811118 4740 reconciler.go:26] "Reconciler: start to sync state" Oct 14 13:06:14.812494 master-1 kubenswrapper[4740]: I1014 13:06:14.812444 4740 
server.go:449] "Adding debug handlers to kubelet server" Oct 14 13:06:14.818218 master-1 kubenswrapper[4740]: E1014 13:06:14.817980 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 14 13:06:14.818452 master-1 kubenswrapper[4740]: I1014 13:06:14.818409 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 14 13:06:14.818572 master-1 kubenswrapper[4740]: W1014 13:06:14.818534 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 14 13:06:14.818670 master-1 kubenswrapper[4740]: E1014 13:06:14.818593 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 14 13:06:14.819168 master-1 kubenswrapper[4740]: I1014 13:06:14.819118 4740 factory.go:55] Registering systemd factory Oct 14 13:06:14.819168 master-1 kubenswrapper[4740]: I1014 13:06:14.819160 4740 factory.go:221] Registration of the systemd container factory successfully Oct 14 13:06:14.822084 master-1 kubenswrapper[4740]: E1014 13:06:14.818941 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d42b7171 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.806303089 +0000 UTC m=+0.616592458,LastTimestamp:2025-10-14 13:06:14.806303089 +0000 UTC m=+0.616592458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:14.822549 master-1 kubenswrapper[4740]: I1014 13:06:14.822335 4740 factory.go:153] Registering CRI-O factory Oct 14 13:06:14.822672 master-1 kubenswrapper[4740]: I1014 13:06:14.822556 4740 factory.go:221] Registration of the crio container factory successfully Oct 14 13:06:14.822745 master-1 kubenswrapper[4740]: I1014 13:06:14.822699 4740 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 14 13:06:14.822745 master-1 kubenswrapper[4740]: I1014 13:06:14.822741 4740 factory.go:103] Registering Raw factory Oct 14 13:06:14.822857 master-1 kubenswrapper[4740]: I1014 13:06:14.822770 4740 manager.go:1196] Started watching for new ooms in manager Oct 14 13:06:14.823871 master-1 kubenswrapper[4740]: I1014 13:06:14.823820 4740 manager.go:319] Starting recovery of all containers Oct 14 13:06:14.838555 master-1 kubenswrapper[4740]: E1014 13:06:14.838484 4740 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Oct 14 13:06:14.853019 master-1 kubenswrapper[4740]: I1014 13:06:14.852970 4740 manager.go:324] Recovery completed Oct 14 13:06:14.872628 master-1 kubenswrapper[4740]: I1014 13:06:14.872558 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:06:14.873821 master-1 kubenswrapper[4740]: I1014 13:06:14.873770 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 14 13:06:14.873821 master-1 kubenswrapper[4740]: I1014 13:06:14.873812 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 14 13:06:14.873821 master-1 kubenswrapper[4740]: I1014 13:06:14.873825 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 14 13:06:14.874926 master-1 kubenswrapper[4740]: I1014 13:06:14.874886 4740 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 14 13:06:14.874926 master-1 kubenswrapper[4740]: I1014 13:06:14.874907 4740 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 14 13:06:14.874926 master-1 kubenswrapper[4740]: I1014 13:06:14.874930 4740 state_mem.go:36] "Initialized new in-memory state store" Oct 14 13:06:14.876993 master-1 kubenswrapper[4740]: E1014 13:06:14.876835 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:14.878828 master-1 kubenswrapper[4740]: I1014 13:06:14.878790 4740 policy_none.go:49] "None policy: Start" Oct 14 13:06:14.879649 master-1 kubenswrapper[4740]: I1014 13:06:14.879600 4740 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 14 13:06:14.879713 master-1 kubenswrapper[4740]: I1014 13:06:14.879691 4740 state_mem.go:35] "Initializing new in-memory state store" Oct 14 13:06:14.887693 master-1 kubenswrapper[4740]: E1014 13:06:14.887269 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,LastTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:14.898896 master-1 kubenswrapper[4740]: E1014 13:06:14.898585 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-1.186e5d60d831d7b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,LastTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:14.911787 master-1 kubenswrapper[4740]: E1014 13:06:14.911688 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found" Oct 14 13:06:14.941969 master-1 kubenswrapper[4740]: I1014 13:06:14.939381 4740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 14 13:06:14.942670 master-1 kubenswrapper[4740]: I1014 13:06:14.942208 4740 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 14 13:06:14.942670 master-1 kubenswrapper[4740]: I1014 13:06:14.942458 4740 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 14 13:06:14.942670 master-1 kubenswrapper[4740]: I1014 13:06:14.942510 4740 kubelet.go:2335] "Starting kubelet main sync loop" Oct 14 13:06:14.942670 master-1 kubenswrapper[4740]: E1014 13:06:14.942591 4740 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 14 13:06:14.953831 master-1 kubenswrapper[4740]: I1014 13:06:14.953775 4740 manager.go:334] "Starting Device Plugin manager" Oct 14 13:06:14.953831 master-1 kubenswrapper[4740]: I1014 13:06:14.953830 4740 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 14 13:06:14.954002 master-1 kubenswrapper[4740]: I1014 13:06:14.953846 4740 server.go:79] "Starting device plugin registration server" Oct 14 13:06:14.954338 master-1 kubenswrapper[4740]: I1014 13:06:14.954307 4740 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 14 13:06:14.954422 master-1 kubenswrapper[4740]: I1014 13:06:14.954332 4740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 14 13:06:14.954531 master-1 kubenswrapper[4740]: I1014 13:06:14.954491 4740 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 14 13:06:14.955150 master-1 kubenswrapper[4740]: I1014 13:06:14.954748 4740 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 14 13:06:14.955150 master-1 kubenswrapper[4740]: I1014 13:06:14.954785 4740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 14 13:06:14.956071 master-1 kubenswrapper[4740]: W1014 13:06:14.955500 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 14 13:06:14.956407 master-1 kubenswrapper[4740]: E1014 13:06:14.956155 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 14 13:06:14.956599 master-1 kubenswrapper[4740]: E1014 13:06:14.956481 4740 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-1\" not found" Oct 14 13:06:14.972410 master-1 kubenswrapper[4740]: E1014 13:06:14.972126 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60dd3c1150 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.958387536 +0000 UTC m=+0.768676905,LastTimestamp:2025-10-14 13:06:14.958387536 +0000 UTC m=+0.768676905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.029538 master-1 kubenswrapper[4740]: E1014 13:06:15.029480 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" 
in the namespace \"kube-node-lease\"" interval="400ms" Oct 14 13:06:15.043699 master-1 kubenswrapper[4740]: I1014 13:06:15.043615 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-1"] Oct 14 13:06:15.043842 master-1 kubenswrapper[4740]: I1014 13:06:15.043773 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:06:15.045164 master-1 kubenswrapper[4740]: I1014 13:06:15.045086 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 14 13:06:15.045164 master-1 kubenswrapper[4740]: I1014 13:06:15.045156 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 14 13:06:15.045694 master-1 kubenswrapper[4740]: I1014 13:06:15.045180 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 14 13:06:15.045694 master-1 kubenswrapper[4740]: I1014 13:06:15.045603 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.045694 master-1 kubenswrapper[4740]: I1014 13:06:15.045645 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:06:15.046680 master-1 kubenswrapper[4740]: I1014 13:06:15.046630 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 14 13:06:15.046680 master-1 kubenswrapper[4740]: I1014 13:06:15.046677 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 14 13:06:15.046836 master-1 kubenswrapper[4740]: I1014 13:06:15.046695 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 14 13:06:15.054977 master-1 kubenswrapper[4740]: I1014 13:06:15.054908 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:06:15.056767 master-1 kubenswrapper[4740]: I1014 13:06:15.056698 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 14 13:06:15.056767 master-1 kubenswrapper[4740]: I1014 13:06:15.056765 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 14 13:06:15.056928 master-1 kubenswrapper[4740]: I1014 13:06:15.056790 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 14 13:06:15.056928 master-1 kubenswrapper[4740]: I1014 13:06:15.056849 4740 kubelet_node_status.go:76] "Attempting to register node" node="master-1" Oct 14 13:06:15.059556 master-1 kubenswrapper[4740]: E1014 13:06:15.059125 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d8315986\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:15.045129758 +0000 UTC m=+0.855419117,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.068469 master-1 kubenswrapper[4740]: E1014 13:06:15.068394 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1" Oct 14 13:06:15.068616 master-1 kubenswrapper[4740]: E1014 13:06:15.068449 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831ae69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,LastTimestamp:2025-10-14 13:06:15.045171261 +0000 UTC m=+0.855460630,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.078038 master-1 
kubenswrapper[4740]: E1014 13:06:15.077842 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831d7b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831d7b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,LastTimestamp:2025-10-14 13:06:15.045193172 +0000 UTC m=+0.855482531,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.088271 master-1 kubenswrapper[4740]: E1014 13:06:15.088093 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d8315986\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:15.046659416 +0000 UTC m=+0.856948775,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.098638 master-1 kubenswrapper[4740]: E1014 13:06:15.098397 4740 event.go:359] "Server 
rejected event (will not retry!)" err="events \"master-1.186e5d60d831ae69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,LastTimestamp:2025-10-14 13:06:15.046688057 +0000 UTC m=+0.856977416,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.108872 master-1 kubenswrapper[4740]: E1014 13:06:15.108690 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831d7b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831d7b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,LastTimestamp:2025-10-14 13:06:15.046705368 +0000 UTC m=+0.856994727,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.113399 master-1 kubenswrapper[4740]: I1014 13:06:15.113330 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-etc-kube\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.119160 master-1 kubenswrapper[4740]: E1014 13:06:15.118998 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d8315986\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:15.056748069 +0000 UTC m=+0.867037438,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.129030 master-1 kubenswrapper[4740]: E1014 13:06:15.128844 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831ae69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,LastTimestamp:2025-10-14 
13:06:15.056778891 +0000 UTC m=+0.867068260,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.137143 master-1 kubenswrapper[4740]: E1014 13:06:15.136949 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831d7b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831d7b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,LastTimestamp:2025-10-14 13:06:15.056804442 +0000 UTC m=+0.867093811,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.214782 master-1 kubenswrapper[4740]: I1014 13:06:15.214571 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-etc-kube\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.214782 master-1 kubenswrapper[4740]: I1014 13:06:15.214675 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.215078 master-1 kubenswrapper[4740]: I1014 13:06:15.214789 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-etc-kube\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.269046 master-1 kubenswrapper[4740]: I1014 13:06:15.268906 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:06:15.270792 master-1 kubenswrapper[4740]: I1014 13:06:15.270700 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 14 13:06:15.270792 master-1 kubenswrapper[4740]: I1014 13:06:15.270760 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 14 13:06:15.270792 master-1 kubenswrapper[4740]: I1014 13:06:15.270777 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 14 13:06:15.271043 master-1 kubenswrapper[4740]: I1014 13:06:15.270830 4740 kubelet_node_status.go:76] "Attempting to register node" node="master-1" Oct 14 13:06:15.280427 master-1 kubenswrapper[4740]: E1014 13:06:15.280299 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d8315986\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:15.270739267 +0000 UTC m=+1.081028636,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.280534 master-1 kubenswrapper[4740]: E1014 13:06:15.280481 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1" Oct 14 13:06:15.292459 master-1 kubenswrapper[4740]: E1014 13:06:15.292222 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831ae69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,LastTimestamp:2025-10-14 13:06:15.270771258 +0000 UTC m=+1.081060627,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.302458 master-1 kubenswrapper[4740]: E1014 13:06:15.302207 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831d7b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831d7b3 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,LastTimestamp:2025-10-14 13:06:15.270787769 +0000 UTC m=+1.081077128,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.315865 master-1 kubenswrapper[4740]: I1014 13:06:15.315792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.315990 master-1 kubenswrapper[4740]: I1014 13:06:15.315886 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3273b5dc02e0d8cacbf64fe78c713d50-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-1\" (UID: \"3273b5dc02e0d8cacbf64fe78c713d50\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.380217 master-1 kubenswrapper[4740]: I1014 13:06:15.380138 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" Oct 14 13:06:15.439810 master-1 kubenswrapper[4740]: E1014 13:06:15.439721 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 14 13:06:15.643511 master-1 kubenswrapper[4740]: W1014 13:06:15.643377 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-1" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 14 13:06:15.643511 master-1 kubenswrapper[4740]: E1014 13:06:15.643450 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-1\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 14 13:06:15.680973 master-1 kubenswrapper[4740]: I1014 13:06:15.680869 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:06:15.682221 master-1 kubenswrapper[4740]: I1014 13:06:15.682165 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory" Oct 14 13:06:15.682335 master-1 kubenswrapper[4740]: I1014 13:06:15.682222 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure" Oct 14 13:06:15.682335 master-1 kubenswrapper[4740]: I1014 13:06:15.682268 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID" Oct 14 13:06:15.682335 master-1 kubenswrapper[4740]: I1014 13:06:15.682313 4740 kubelet_node_status.go:76] "Attempting to register node" node="master-1" 
Oct 14 13:06:15.694849 master-1 kubenswrapper[4740]: E1014 13:06:15.694656 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d8315986\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:15.682200052 +0000 UTC m=+1.492489421,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.694849 master-1 kubenswrapper[4740]: E1014 13:06:15.694847 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1" Oct 14 13:06:15.704979 master-1 kubenswrapper[4740]: E1014 13:06:15.704819 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831ae69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC 
m=+0.684110116,LastTimestamp:2025-10-14 13:06:15.68226142 +0000 UTC m=+1.492550789,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.714737 master-1 kubenswrapper[4740]: E1014 13:06:15.714583 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831d7b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831d7b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,LastTimestamp:2025-10-14 13:06:15.682277705 +0000 UTC m=+1.492567074,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}" Oct 14 13:06:15.831868 master-1 kubenswrapper[4740]: I1014 13:06:15.831759 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Oct 14 13:06:15.977617 master-1 kubenswrapper[4740]: W1014 13:06:15.977416 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 14 13:06:15.977617 master-1 kubenswrapper[4740]: E1014 13:06:15.977489 4740 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 14 13:06:16.045482 master-1 kubenswrapper[4740]: W1014 13:06:16.045387 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 14 13:06:16.045482 master-1 kubenswrapper[4740]: E1014 13:06:16.045458 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Oct 14 13:06:16.132342 master-1 kubenswrapper[4740]: W1014 13:06:16.132183 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 14 13:06:16.132638 master-1 kubenswrapper[4740]: E1014 13:06:16.132358 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Oct 14 13:06:16.250610 master-1 kubenswrapper[4740]: E1014 13:06:16.250512 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="1.6s"
Oct 14 13:06:16.427689 master-1 kubenswrapper[4740]: W1014 13:06:16.427572 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3273b5dc02e0d8cacbf64fe78c713d50.slice/crio-d44570b8bad682a5efc76bbb594dc8d2bfafe1cb7180ec6b4071243501ba420e WatchSource:0}: Error finding container d44570b8bad682a5efc76bbb594dc8d2bfafe1cb7180ec6b4071243501ba420e: Status 404 returned error can't find the container with id d44570b8bad682a5efc76bbb594dc8d2bfafe1cb7180ec6b4071243501ba420e
Oct 14 13:06:16.432646 master-1 kubenswrapper[4740]: I1014 13:06:16.432594 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 13:06:16.443019 master-1 kubenswrapper[4740]: E1014 13:06:16.442855 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d61351948f8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\",Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:16.432503032 +0000 UTC m=+2.242792401,LastTimestamp:2025-10-14 13:06:16.432503032 +0000 UTC m=+2.242792401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:16.495977 master-1 kubenswrapper[4740]: I1014 13:06:16.495897 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:06:16.497569 master-1 kubenswrapper[4740]: I1014 13:06:16.497519 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 14 13:06:16.497697 master-1 kubenswrapper[4740]: I1014 13:06:16.497577 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 14 13:06:16.497697 master-1 kubenswrapper[4740]: I1014 13:06:16.497622 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 14 13:06:16.497697 master-1 kubenswrapper[4740]: I1014 13:06:16.497668 4740 kubelet_node_status.go:76] "Attempting to register node" node="master-1"
Oct 14 13:06:16.506523 master-1 kubenswrapper[4740]: E1014 13:06:16.506359 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1"
Oct 14 13:06:16.506523 master-1 kubenswrapper[4740]: E1014 13:06:16.506216 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d8315986\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:16.497555722 +0000 UTC m=+2.307845091,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:16.515932 master-1 kubenswrapper[4740]: E1014 13:06:16.515786 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831ae69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,LastTimestamp:2025-10-14 13:06:16.497589343 +0000 UTC m=+2.307878712,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:16.525266 master-1 kubenswrapper[4740]: E1014 13:06:16.525106 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831d7b3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831d7b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-1 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873831347 +0000 UTC m=+0.684120686,LastTimestamp:2025-10-14 13:06:16.497633174 +0000 UTC m=+2.307922543,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:16.828491 master-1 kubenswrapper[4740]: I1014 13:06:16.828337 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 14 13:06:16.951872 master-1 kubenswrapper[4740]: I1014 13:06:16.951669 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerStarted","Data":"d44570b8bad682a5efc76bbb594dc8d2bfafe1cb7180ec6b4071243501ba420e"}
Oct 14 13:06:17.696787 master-1 kubenswrapper[4740]: W1014 13:06:17.696701 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-1" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Oct 14 13:06:17.696787 master-1 kubenswrapper[4740]: E1014 13:06:17.696787 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-1\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Oct 14 13:06:17.827562 master-1 kubenswrapper[4740]: I1014 13:06:17.827495 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 14 13:06:17.860034 master-1 kubenswrapper[4740]: E1014 13:06:17.859942 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-1\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="3.2s"
Oct 14 13:06:18.106602 master-1 kubenswrapper[4740]: I1014 13:06:18.106495 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:06:18.107720 master-1 kubenswrapper[4740]: I1014 13:06:18.107688 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 14 13:06:18.107832 master-1 kubenswrapper[4740]: I1014 13:06:18.107732 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 14 13:06:18.107832 master-1 kubenswrapper[4740]: I1014 13:06:18.107748 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 14 13:06:18.107832 master-1 kubenswrapper[4740]: I1014 13:06:18.107784 4740 kubelet_node_status.go:76] "Attempting to register node" node="master-1"
Oct 14 13:06:18.118650 master-1 kubenswrapper[4740]: E1014 13:06:18.118494 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d8315986\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d8315986 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873799046 +0000 UTC m=+0.684088385,LastTimestamp:2025-10-14 13:06:18.107715351 +0000 UTC m=+3.918004690,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:18.118922 master-1 kubenswrapper[4740]: E1014 13:06:18.118586 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-1"
Oct 14 13:06:18.127888 master-1 kubenswrapper[4740]: E1014 13:06:18.127775 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"master-1.186e5d60d831ae69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-1.186e5d60d831ae69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-1,UID:master-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-1 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:14.873820777 +0000 UTC m=+0.684110116,LastTimestamp:2025-10-14 13:06:18.10774258 +0000 UTC m=+3.918031919,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:18.137366 master-1 kubenswrapper[4740]: E1014 13:06:18.137168 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d6199e9d2b6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" in 1.691s (1.691s including waiting). Image size: 458126368 bytes.,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:18.123891382 +0000 UTC m=+3.934180731,LastTimestamp:2025-10-14 13:06:18.123891382 +0000 UTC m=+3.934180731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:18.157349 master-1 kubenswrapper[4740]: W1014 13:06:18.157287 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Oct 14 13:06:18.157547 master-1 kubenswrapper[4740]: E1014 13:06:18.157359 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Oct 14 13:06:18.178953 master-1 kubenswrapper[4740]: W1014 13:06:18.178828 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Oct 14 13:06:18.179138 master-1 kubenswrapper[4740]: E1014 13:06:18.178953 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Oct 14 13:06:18.386070 master-1 kubenswrapper[4740]: E1014 13:06:18.385895 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d61a8e9bf8a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:18.375544714 +0000 UTC m=+4.185834083,LastTimestamp:2025-10-14 13:06:18.375544714 +0000 UTC m=+4.185834083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:18.403856 master-1 kubenswrapper[4740]: E1014 13:06:18.403687 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d61aa0b68c6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:18.394527942 +0000 UTC m=+4.204817311,LastTimestamp:2025-10-14 13:06:18.394527942 +0000 UTC m=+4.204817311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:18.614895 master-1 kubenswrapper[4740]: W1014 13:06:18.614809 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct 14 13:06:18.614895 master-1 kubenswrapper[4740]: E1014 13:06:18.614858 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Oct 14 13:06:18.830818 master-1 kubenswrapper[4740]: I1014 13:06:18.830723 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 14 13:06:18.959459 master-1 kubenswrapper[4740]: I1014 13:06:18.959359 4740 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="3fe3310a228c32f750750b1b7b076de94e233aa6fe33fae9b5bc9dd3ff33224d" exitCode=0
Oct 14 13:06:18.959459 master-1 kubenswrapper[4740]: I1014 13:06:18.959439 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"3fe3310a228c32f750750b1b7b076de94e233aa6fe33fae9b5bc9dd3ff33224d"}
Oct 14 13:06:18.960477 master-1 kubenswrapper[4740]: I1014 13:06:18.959530 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:06:18.961035 master-1 kubenswrapper[4740]: I1014 13:06:18.960995 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 14 13:06:18.961035 master-1 kubenswrapper[4740]: I1014 13:06:18.961042 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 14 13:06:18.961181 master-1 kubenswrapper[4740]: I1014 13:06:18.961062 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 14 13:06:18.987437 master-1 kubenswrapper[4740]: E1014 13:06:18.987213 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d61ccb453dc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" already present on machine,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:18.976023516 +0000 UTC m=+4.786312885,LastTimestamp:2025-10-14 13:06:18.976023516 +0000 UTC m=+4.786312885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:19.213787 master-1 kubenswrapper[4740]: E1014 13:06:19.213632 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d61da397f37 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:19.202854711 +0000 UTC m=+5.013144050,LastTimestamp:2025-10-14 13:06:19.202854711 +0000 UTC m=+5.013144050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:19.224127 master-1 kubenswrapper[4740]: E1014 13:06:19.223860 4740 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d61daeabdf0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:19.21447064 +0000 UTC m=+5.024760009,LastTimestamp:2025-10-14 13:06:19.21447064 +0000 UTC m=+5.024760009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:19.828490 master-1 kubenswrapper[4740]: I1014 13:06:19.828368 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Oct 14 13:06:19.963939 master-1 kubenswrapper[4740]: I1014 13:06:19.963860 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/0.log"
Oct 14 13:06:19.964938 master-1 kubenswrapper[4740]: I1014 13:06:19.964346 4740 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="b9588e02b4da5129e47414adbb671b34f66c2f1f6bcc0af9470d99e73fe63883" exitCode=1
Oct 14 13:06:19.964938 master-1 kubenswrapper[4740]: I1014 13:06:19.964402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"b9588e02b4da5129e47414adbb671b34f66c2f1f6bcc0af9470d99e73fe63883"}
Oct 14 13:06:19.964938 master-1 kubenswrapper[4740]: I1014 13:06:19.964489 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:06:19.965589 master-1 kubenswrapper[4740]: I1014 13:06:19.965538 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 14 13:06:19.965589 master-1 kubenswrapper[4740]: I1014 13:06:19.965580 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 14 13:06:19.965589 master-1 kubenswrapper[4740]: I1014 13:06:19.965593 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 14 13:06:19.975307 master-1 kubenswrapper[4740]: I1014 13:06:19.975222 4740 scope.go:117] "RemoveContainer" containerID="b9588e02b4da5129e47414adbb671b34f66c2f1f6bcc0af9470d99e73fe63883"
Oct 14 13:06:19.979907 master-1 kubenswrapper[4740]: I1014 13:06:19.979864 4740 csr.go:261] certificate signing request csr-gm8k8 is approved, waiting to be issued
Oct 14 13:06:19.987797 master-1 kubenswrapper[4740]: E1014 13:06:19.987626 4740 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-1.186e5d61ccb453dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-1.186e5d61ccb453dc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-1,UID:3273b5dc02e0d8cacbf64fe78c713d50,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169\" already present on machine,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:06:18.976023516 +0000 UTC m=+4.786312885,LastTimestamp:2025-10-14 13:06:19.978731002 +0000 UTC m=+5.789020331,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,}"
Oct 14 13:06:19.989829 master-1 kubenswrapper[4740]: I1014 13:06:19.989726 4740 csr.go:257] certificate signing request csr-gm8k8 is issued
Oct 14 13:06:20.684566 master-1 kubenswrapper[4740]: I1014 13:06:20.684092 4740 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Oct 14 13:06:20.838436 master-1 kubenswrapper[4740]: I1014 13:06:20.838366 4740 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 14 13:06:20.856492 master-1 kubenswrapper[4740]: I1014 13:06:20.856435 4740 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 14 13:06:20.920370 master-1 kubenswrapper[4740]: I1014 13:06:20.920210 4740 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 14 13:06:20.968439 master-1 kubenswrapper[4740]: I1014 13:06:20.968332 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/1.log"
Oct 14 13:06:20.969488 master-1 kubenswrapper[4740]: I1014 13:06:20.969446 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/0.log"
Oct 14 13:06:20.969946 master-1 kubenswrapper[4740]: I1014 13:06:20.969914 4740 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="0692c39c4bc66b5c6f9657c6ed02cfa15a9f4bf9bcc16c1427481af5ffcfd4db" exitCode=1
Oct 14 13:06:20.970007 master-1 kubenswrapper[4740]: I1014 13:06:20.969966 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"0692c39c4bc66b5c6f9657c6ed02cfa15a9f4bf9bcc16c1427481af5ffcfd4db"}
Oct 14 13:06:20.970048 master-1 kubenswrapper[4740]: I1014 13:06:20.970003 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:06:20.970048 master-1 kubenswrapper[4740]: I1014 13:06:20.970013 4740 scope.go:117] "RemoveContainer" containerID="b9588e02b4da5129e47414adbb671b34f66c2f1f6bcc0af9470d99e73fe63883"
Oct 14 13:06:20.970893 master-1 kubenswrapper[4740]: I1014 13:06:20.970849 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 14 13:06:20.970987 master-1 kubenswrapper[4740]: I1014 13:06:20.970903 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 14 13:06:20.970987 master-1 kubenswrapper[4740]: I1014 13:06:20.970920 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 14 13:06:20.989611 master-1 kubenswrapper[4740]: I1014 13:06:20.989553 4740 scope.go:117] "RemoveContainer" containerID="0692c39c4bc66b5c6f9657c6ed02cfa15a9f4bf9bcc16c1427481af5ffcfd4db"
Oct 14 13:06:20.989840 master-1 kubenswrapper[4740]: E1014 13:06:20.989744 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50"
Oct 14 13:06:20.990519 master-1 kubenswrapper[4740]: I1014 13:06:20.990451 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-10-15 13:01:17 +0000 UTC, rotation deadline is 2025-10-15 08:38:46.590360872 +0000 UTC
Oct 14 13:06:20.990519 master-1 kubenswrapper[4740]: I1014 13:06:20.990505 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h32m25.599859209s for next certificate rotation
Oct 14 13:06:21.069330 master-1 kubenswrapper[4740]: E1014 13:06:21.069272 4740 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-1\" not found" node="master-1"
Oct 14 13:06:21.179013 master-1 kubenswrapper[4740]: I1014 13:06:21.178935 4740 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 14 13:06:21.179013 master-1 kubenswrapper[4740]: E1014 13:06:21.178981 4740 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-1" not found
Oct 14 13:06:21.202100 master-1 kubenswrapper[4740]: I1014 13:06:21.202041 4740 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 14 13:06:21.219264 master-1 kubenswrapper[4740]: I1014 13:06:21.219125 4740 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 14 13:06:21.280838 master-1 kubenswrapper[4740]: I1014 13:06:21.280764 4740 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-1" not found
Oct 14 13:06:21.319880 master-1 kubenswrapper[4740]: I1014 13:06:21.319770 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:06:21.322558 master-1 kubenswrapper[4740]: I1014 13:06:21.322464 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 14 13:06:21.322684 master-1 kubenswrapper[4740]: I1014 13:06:21.322570 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 14 13:06:21.322684 master-1 kubenswrapper[4740]: I1014 13:06:21.322593 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 14 13:06:21.322684 master-1 kubenswrapper[4740]: I1014 13:06:21.322637 4740 kubelet_node_status.go:76] "Attempting to register node" node="master-1"
Oct 14 13:06:21.329985 master-1 kubenswrapper[4740]: I1014 13:06:21.329932 4740 kubelet_node_status.go:79] "Successfully registered node" node="master-1"
Oct 14 13:06:21.329985 master-1 kubenswrapper[4740]: E1014 13:06:21.329976 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-1\": node \"master-1\" not found"
Oct 14 13:06:21.353516 master-1 kubenswrapper[4740]: E1014 13:06:21.353462 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:21.454127 master-1 kubenswrapper[4740]: E1014 13:06:21.454012 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:21.554544 master-1 kubenswrapper[4740]: E1014 13:06:21.554464 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:21.655540 master-1 kubenswrapper[4740]: E1014 13:06:21.655392 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:21.732635 master-1 kubenswrapper[4740]: I1014 13:06:21.732579 4740 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 14 13:06:21.756265 master-1 kubenswrapper[4740]: E1014 13:06:21.756129 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:21.814537 master-1 kubenswrapper[4740]: I1014 13:06:21.814355 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Oct 14 13:06:21.826935 master-1 kubenswrapper[4740]: I1014 13:06:21.826767 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Oct 14 13:06:21.857257 master-1 kubenswrapper[4740]: E1014 13:06:21.857149 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:21.957534 master-1 kubenswrapper[4740]: E1014 13:06:21.957491 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:21.974989 master-1 kubenswrapper[4740]: I1014 13:06:21.974931 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/1.log"
Oct 14 13:06:21.975704 master-1 kubenswrapper[4740]: I1014 13:06:21.975654 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:06:21.976822 master-1 kubenswrapper[4740]: I1014 13:06:21.976760 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientMemory"
Oct 14 13:06:21.976905 master-1 kubenswrapper[4740]: I1014 13:06:21.976849 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasNoDiskPressure"
Oct 14 13:06:21.976905 master-1 kubenswrapper[4740]: I1014 13:06:21.976874 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeHasSufficientPID"
Oct 14 13:06:21.977599 master-1 kubenswrapper[4740]: I1014 13:06:21.977547 4740 scope.go:117] "RemoveContainer" containerID="0692c39c4bc66b5c6f9657c6ed02cfa15a9f4bf9bcc16c1427481af5ffcfd4db"
Oct 14 13:06:21.977951 master-1 kubenswrapper[4740]: E1014 13:06:21.977883 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50"
Oct 14 13:06:22.057854 master-1 kubenswrapper[4740]: E1014 13:06:22.057761 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.158798 master-1 kubenswrapper[4740]: E1014 13:06:22.158631 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.259300 master-1 kubenswrapper[4740]: E1014 13:06:22.259194 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.360048 master-1 kubenswrapper[4740]: E1014 13:06:22.359891 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.460709 master-1 kubenswrapper[4740]: E1014 13:06:22.460533 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.542828 master-1 kubenswrapper[4740]: I1014 13:06:22.542743 4740 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 14 13:06:22.561322 master-1 kubenswrapper[4740]: E1014 13:06:22.561271 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.662382 master-1 kubenswrapper[4740]: E1014 13:06:22.662264 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.762608 master-1 kubenswrapper[4740]: E1014 13:06:22.762462 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.863209 master-1 kubenswrapper[4740]: E1014 13:06:22.863103 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:22.964074 master-1 kubenswrapper[4740]: E1014 13:06:22.963957 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.065000 master-1 kubenswrapper[4740]: E1014 13:06:23.064850 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.166217 master-1 kubenswrapper[4740]: E1014 13:06:23.166096 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.266518 master-1 kubenswrapper[4740]: E1014 13:06:23.266383 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.367764 master-1 kubenswrapper[4740]: E1014 13:06:23.367607 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.468713 master-1 kubenswrapper[4740]: E1014 13:06:23.468618 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.569865 master-1 kubenswrapper[4740]: E1014 13:06:23.569734 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.669669 master-1 kubenswrapper[4740]: I1014 13:06:23.669477 4740 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 14 13:06:23.669993 master-1 kubenswrapper[4740]: E1014 13:06:23.669931 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-1\" not found"
Oct 14 13:06:23.764420 master-1 kubenswrapper[4740]: I1014 13:06:23.764348 4740 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 14 13:06:23.800346 master-1 kubenswrapper[4740]: I1014 13:06:23.800289 4740 apiserver.go:52] "Watching apiserver"
Oct 14 13:06:23.804847 master-1 kubenswrapper[4740]: I1014 13:06:23.804766 4740 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 14 13:06:23.804993 master-1 kubenswrapper[4740]: I1014 13:06:23.804926 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[]
Oct 14 13:06:23.811426 master-1 kubenswrapper[4740]: I1014 13:06:23.811364 4740 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Oct 14 13:06:34.958735 master-1 kubenswrapper[4740]: I1014 13:06:34.958659 4740 scope.go:117] "RemoveContainer" containerID="0692c39c4bc66b5c6f9657c6ed02cfa15a9f4bf9bcc16c1427481af5ffcfd4db"
Oct 14 13:06:34.963649 master-1 kubenswrapper[4740]: I1014 13:06:34.963219 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-1"]
Oct 14 13:06:36.012877 master-1 kubenswrapper[4740]: I1014 13:06:36.012808 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/2.log"
Oct 14 13:06:36.014024 master-1 kubenswrapper[4740]: I1014 13:06:36.013973 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/1.log"
Oct 14 13:06:36.014683 master-1 kubenswrapper[4740]: I1014 13:06:36.014644 4740 generic.go:334] "Generic (PLEG): container finished" podID="3273b5dc02e0d8cacbf64fe78c713d50" containerID="2af5e3a0384dd47e97372edaae73a963b5e3d20bbc2876bda0008159a547d18c" exitCode=1
Oct 14 13:06:36.014730 master-1 kubenswrapper[4740]: I1014 13:06:36.014690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerDied","Data":"2af5e3a0384dd47e97372edaae73a963b5e3d20bbc2876bda0008159a547d18c"}
Oct 14 13:06:36.014802 master-1 kubenswrapper[4740]: I1014 13:06:36.014763 4740 scope.go:117] "RemoveContainer" containerID="0692c39c4bc66b5c6f9657c6ed02cfa15a9f4bf9bcc16c1427481af5ffcfd4db"
Oct 14 13:06:36.028587 master-1 kubenswrapper[4740]: I1014 13:06:36.028525 4740 scope.go:117] "RemoveContainer" containerID="2af5e3a0384dd47e97372edaae73a963b5e3d20bbc2876bda0008159a547d18c"
Oct 14 13:06:36.029375 master-1 kubenswrapper[4740]: E1014 13:06:36.028874 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50" Oct 14 13:06:37.019776 master-1 kubenswrapper[4740]: I1014 13:06:37.019696 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/2.log" Oct 14 13:06:37.021018 master-1 kubenswrapper[4740]: I1014 13:06:37.020929 4740 scope.go:117] "RemoveContainer" containerID="2af5e3a0384dd47e97372edaae73a963b5e3d20bbc2876bda0008159a547d18c" Oct 14 13:06:37.021308 master-1 kubenswrapper[4740]: E1014 13:06:37.021167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50" Oct 14 13:06:40.335212 master-1 kubenswrapper[4740]: I1014 13:06:40.335073 4740 csr.go:261] certificate signing request csr-bvhbv is approved, waiting to be issued Oct 14 13:06:40.346062 master-1 kubenswrapper[4740]: I1014 13:06:40.345998 4740 csr.go:257] certificate signing request csr-bvhbv is issued Oct 14 13:06:41.347942 master-1 kubenswrapper[4740]: I1014 13:06:41.347800 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-15 13:01:17 +0000 UTC, rotation deadline is 2025-10-15 10:30:23.236604198 +0000 UTC Oct 14 13:06:41.347942 master-1 kubenswrapper[4740]: I1014 13:06:41.347864 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 
21h23m41.888743535s for next certificate rotation Oct 14 13:06:42.349041 master-1 kubenswrapper[4740]: I1014 13:06:42.348834 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-10-15 13:01:17 +0000 UTC, rotation deadline is 2025-10-15 08:45:31.985944345 +0000 UTC Oct 14 13:06:42.349041 master-1 kubenswrapper[4740]: I1014 13:06:42.348964 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h38m49.636984054s for next certificate rotation Oct 14 13:06:49.943741 master-1 kubenswrapper[4740]: I1014 13:06:49.943632 4740 scope.go:117] "RemoveContainer" containerID="2af5e3a0384dd47e97372edaae73a963b5e3d20bbc2876bda0008159a547d18c" Oct 14 13:06:49.944544 master-1 kubenswrapper[4740]: E1014 13:06:49.944094 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podUID="3273b5dc02e0d8cacbf64fe78c713d50" Oct 14 13:06:52.232450 master-1 kubenswrapper[4740]: I1014 13:06:52.232358 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-854f54f8c9-t6kgz"] Oct 14 13:06:52.233106 master-1 kubenswrapper[4740]: I1014 13:06:52.232670 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.235608 master-1 kubenswrapper[4740]: I1014 13:06:52.235528 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 14 13:06:52.235772 master-1 kubenswrapper[4740]: I1014 13:06:52.235534 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 14 13:06:52.235772 master-1 kubenswrapper[4740]: I1014 13:06:52.235634 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 14 13:06:52.348925 master-1 kubenswrapper[4740]: I1014 13:06:52.348805 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/eae22243-e292-4623-90b4-dae431cf47dc-host-etc-kube\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.348925 master-1 kubenswrapper[4740]: I1014 13:06:52.348906 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eae22243-e292-4623-90b4-dae431cf47dc-metrics-tls\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.349283 master-1 kubenswrapper[4740]: I1014 13:06:52.348952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwq9d\" (UniqueName: \"kubernetes.io/projected/eae22243-e292-4623-90b4-dae431cf47dc-kube-api-access-bwq9d\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " 
pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.450200 master-1 kubenswrapper[4740]: I1014 13:06:52.450114 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/eae22243-e292-4623-90b4-dae431cf47dc-host-etc-kube\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.450200 master-1 kubenswrapper[4740]: I1014 13:06:52.450182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eae22243-e292-4623-90b4-dae431cf47dc-metrics-tls\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.450200 master-1 kubenswrapper[4740]: I1014 13:06:52.450207 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwq9d\" (UniqueName: \"kubernetes.io/projected/eae22243-e292-4623-90b4-dae431cf47dc-kube-api-access-bwq9d\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.450509 master-1 kubenswrapper[4740]: I1014 13:06:52.450346 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/eae22243-e292-4623-90b4-dae431cf47dc-host-etc-kube\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.451119 master-1 kubenswrapper[4740]: I1014 13:06:52.451077 4740 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 14 13:06:52.458240 master-1 kubenswrapper[4740]: I1014 13:06:52.458195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/eae22243-e292-4623-90b4-dae431cf47dc-metrics-tls\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.474716 master-1 kubenswrapper[4740]: I1014 13:06:52.474624 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwq9d\" (UniqueName: \"kubernetes.io/projected/eae22243-e292-4623-90b4-dae431cf47dc-kube-api-access-bwq9d\") pod \"network-operator-854f54f8c9-t6kgz\" (UID: \"eae22243-e292-4623-90b4-dae431cf47dc\") " pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.555625 master-1 kubenswrapper[4740]: I1014 13:06:52.555515 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" Oct 14 13:06:52.569693 master-1 kubenswrapper[4740]: W1014 13:06:52.569638 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeae22243_e292_4623_90b4_dae431cf47dc.slice/crio-2510e59261915e4b10b4d146ba3039cd49726fafc2b389c97809a846599f90a3 WatchSource:0}: Error finding container 2510e59261915e4b10b4d146ba3039cd49726fafc2b389c97809a846599f90a3: Status 404 returned error can't find the container with id 2510e59261915e4b10b4d146ba3039cd49726fafc2b389c97809a846599f90a3 Oct 14 13:06:53.053047 master-1 kubenswrapper[4740]: I1014 13:06:53.052975 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" event={"ID":"eae22243-e292-4623-90b4-dae431cf47dc","Type":"ContainerStarted","Data":"2510e59261915e4b10b4d146ba3039cd49726fafc2b389c97809a846599f90a3"} Oct 14 13:06:57.063106 master-1 kubenswrapper[4740]: I1014 13:06:57.062999 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" event={"ID":"eae22243-e292-4623-90b4-dae431cf47dc","Type":"ContainerStarted","Data":"fe0263de8180e4d07e93f75cd5e428f39e11c32e6586b3b42beb63acb6a0eea2"} Oct 14 13:06:57.081408 master-1 kubenswrapper[4740]: I1014 13:06:57.081223 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" podStartSLOduration=1.638116592 podStartE2EDuration="5.081193906s" podCreationTimestamp="2025-10-14 13:06:52 +0000 UTC" firstStartedPulling="2025-10-14 13:06:52.572874111 +0000 UTC m=+38.383163450" lastFinishedPulling="2025-10-14 13:06:56.015951435 +0000 UTC m=+41.826240764" observedRunningTime="2025-10-14 13:06:57.080720775 +0000 UTC m=+42.891010134" watchObservedRunningTime="2025-10-14 13:06:57.081193906 +0000 UTC m=+42.891483275" Oct 14 
13:06:58.983369 master-1 kubenswrapper[4740]: I1014 13:06:58.983150 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-jqdjc"] Oct 14 13:06:58.984345 master-1 kubenswrapper[4740]: I1014 13:06:58.983452 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-jqdjc" Oct 14 13:06:59.094852 master-1 kubenswrapper[4740]: I1014 13:06:59.094756 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/6a070bdb-dc12-4c00-874d-9ec5dbc16438-kube-api-access-hp45s\") pod \"mtu-prober-jqdjc\" (UID: \"6a070bdb-dc12-4c00-874d-9ec5dbc16438\") " pod="openshift-network-operator/mtu-prober-jqdjc" Oct 14 13:06:59.196098 master-1 kubenswrapper[4740]: I1014 13:06:59.196006 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/6a070bdb-dc12-4c00-874d-9ec5dbc16438-kube-api-access-hp45s\") pod \"mtu-prober-jqdjc\" (UID: \"6a070bdb-dc12-4c00-874d-9ec5dbc16438\") " pod="openshift-network-operator/mtu-prober-jqdjc" Oct 14 13:06:59.231114 master-1 kubenswrapper[4740]: I1014 13:06:59.231016 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/6a070bdb-dc12-4c00-874d-9ec5dbc16438-kube-api-access-hp45s\") pod \"mtu-prober-jqdjc\" (UID: \"6a070bdb-dc12-4c00-874d-9ec5dbc16438\") " pod="openshift-network-operator/mtu-prober-jqdjc" Oct 14 13:06:59.300794 master-1 kubenswrapper[4740]: I1014 13:06:59.300645 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-jqdjc" Oct 14 13:06:59.320039 master-1 kubenswrapper[4740]: W1014 13:06:59.319957 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a070bdb_dc12_4c00_874d_9ec5dbc16438.slice/crio-07c54872ef4978974730a77a334e2bcee36699557dd6a47a7aa140cb6f3c789d WatchSource:0}: Error finding container 07c54872ef4978974730a77a334e2bcee36699557dd6a47a7aa140cb6f3c789d: Status 404 returned error can't find the container with id 07c54872ef4978974730a77a334e2bcee36699557dd6a47a7aa140cb6f3c789d Oct 14 13:07:00.073371 master-1 kubenswrapper[4740]: I1014 13:07:00.073016 4740 generic.go:334] "Generic (PLEG): container finished" podID="6a070bdb-dc12-4c00-874d-9ec5dbc16438" containerID="3766442c27bb97fdb3172d5d35ef57eed36dc9e7696554f7a70c82794900b102" exitCode=0 Oct 14 13:07:00.073371 master-1 kubenswrapper[4740]: I1014 13:07:00.073140 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-jqdjc" event={"ID":"6a070bdb-dc12-4c00-874d-9ec5dbc16438","Type":"ContainerDied","Data":"3766442c27bb97fdb3172d5d35ef57eed36dc9e7696554f7a70c82794900b102"} Oct 14 13:07:00.074178 master-1 kubenswrapper[4740]: I1014 13:07:00.073391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-jqdjc" event={"ID":"6a070bdb-dc12-4c00-874d-9ec5dbc16438","Type":"ContainerStarted","Data":"07c54872ef4978974730a77a334e2bcee36699557dd6a47a7aa140cb6f3c789d"} Oct 14 13:07:01.096147 master-1 kubenswrapper[4740]: I1014 13:07:01.096102 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-jqdjc" Oct 14 13:07:01.208374 master-1 kubenswrapper[4740]: I1014 13:07:01.208295 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/6a070bdb-dc12-4c00-874d-9ec5dbc16438-kube-api-access-hp45s\") pod \"6a070bdb-dc12-4c00-874d-9ec5dbc16438\" (UID: \"6a070bdb-dc12-4c00-874d-9ec5dbc16438\") " Oct 14 13:07:01.212823 master-1 kubenswrapper[4740]: I1014 13:07:01.212769 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a070bdb-dc12-4c00-874d-9ec5dbc16438-kube-api-access-hp45s" (OuterVolumeSpecName: "kube-api-access-hp45s") pod "6a070bdb-dc12-4c00-874d-9ec5dbc16438" (UID: "6a070bdb-dc12-4c00-874d-9ec5dbc16438"). InnerVolumeSpecName "kube-api-access-hp45s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:07:01.308968 master-1 kubenswrapper[4740]: I1014 13:07:01.308852 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/6a070bdb-dc12-4c00-874d-9ec5dbc16438-kube-api-access-hp45s\") on node \"master-1\" DevicePath \"\"" Oct 14 13:07:02.079154 master-1 kubenswrapper[4740]: I1014 13:07:02.079057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-jqdjc" event={"ID":"6a070bdb-dc12-4c00-874d-9ec5dbc16438","Type":"ContainerDied","Data":"07c54872ef4978974730a77a334e2bcee36699557dd6a47a7aa140cb6f3c789d"} Oct 14 13:07:02.079154 master-1 kubenswrapper[4740]: I1014 13:07:02.079120 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-jqdjc" Oct 14 13:07:02.079543 master-1 kubenswrapper[4740]: I1014 13:07:02.079128 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c54872ef4978974730a77a334e2bcee36699557dd6a47a7aa140cb6f3c789d" Oct 14 13:07:04.004944 master-1 kubenswrapper[4740]: I1014 13:07:04.004846 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-jqdjc"] Oct 14 13:07:04.007426 master-1 kubenswrapper[4740]: I1014 13:07:04.007375 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-jqdjc"] Oct 14 13:07:04.944756 master-1 kubenswrapper[4740]: I1014 13:07:04.944632 4740 scope.go:117] "RemoveContainer" containerID="2af5e3a0384dd47e97372edaae73a963b5e3d20bbc2876bda0008159a547d18c" Oct 14 13:07:04.950344 master-1 kubenswrapper[4740]: I1014 13:07:04.950263 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a070bdb-dc12-4c00-874d-9ec5dbc16438" path="/var/lib/kubelet/pods/6a070bdb-dc12-4c00-874d-9ec5dbc16438/volumes" Oct 14 13:07:06.091836 master-1 kubenswrapper[4740]: I1014 13:07:06.091765 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-1_3273b5dc02e0d8cacbf64fe78c713d50/kube-rbac-proxy-crio/2.log" Oct 14 13:07:06.092629 master-1 kubenswrapper[4740]: I1014 13:07:06.092531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" event={"ID":"3273b5dc02e0d8cacbf64fe78c713d50","Type":"ContainerStarted","Data":"7c8f7eeadaee6600aacf849c2b965447f5db203bfc54d4eeb6745b7c82af7a4c"} Oct 14 13:07:08.866167 master-1 kubenswrapper[4740]: I1014 13:07:08.866001 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-1" podStartSLOduration=34.865971152 
podStartE2EDuration="34.865971152s" podCreationTimestamp="2025-10-14 13:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:07:06.109578273 +0000 UTC m=+51.919867642" watchObservedRunningTime="2025-10-14 13:07:08.865971152 +0000 UTC m=+54.676260511" Oct 14 13:07:08.866917 master-1 kubenswrapper[4740]: I1014 13:07:08.866395 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tq8hl"] Oct 14 13:07:08.866917 master-1 kubenswrapper[4740]: E1014 13:07:08.866464 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a070bdb-dc12-4c00-874d-9ec5dbc16438" containerName="prober" Oct 14 13:07:08.866917 master-1 kubenswrapper[4740]: I1014 13:07:08.866483 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a070bdb-dc12-4c00-874d-9ec5dbc16438" containerName="prober" Oct 14 13:07:08.866917 master-1 kubenswrapper[4740]: I1014 13:07:08.866513 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a070bdb-dc12-4c00-874d-9ec5dbc16438" containerName="prober" Oct 14 13:07:08.866917 master-1 kubenswrapper[4740]: I1014 13:07:08.866782 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.870906 master-1 kubenswrapper[4740]: I1014 13:07:08.870842 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 14 13:07:08.871025 master-1 kubenswrapper[4740]: I1014 13:07:08.870961 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 14 13:07:08.871338 master-1 kubenswrapper[4740]: I1014 13:07:08.871262 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 14 13:07:08.871433 master-1 kubenswrapper[4740]: I1014 13:07:08.871410 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 14 13:07:08.955623 master-1 kubenswrapper[4740]: I1014 13:07:08.955539 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-daemon-config\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.955623 master-1 kubenswrapper[4740]: I1014 13:07:08.955606 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-conf-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.955958 master-1 kubenswrapper[4740]: I1014 13:07:08.955679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-system-cni-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " 
pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.955958 master-1 kubenswrapper[4740]: I1014 13:07:08.955747 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-netns\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.955958 master-1 kubenswrapper[4740]: I1014 13:07:08.955856 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-cni-multus\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.955958 master-1 kubenswrapper[4740]: I1014 13:07:08.955894 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-socket-dir-parent\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.956180 master-1 kubenswrapper[4740]: I1014 13:07:08.955985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-k8s-cni-cncf-io\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.956180 master-1 kubenswrapper[4740]: I1014 13:07:08.956076 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-cnibin\") pod \"multus-tq8hl\" (UID: 
\"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.956180 master-1 kubenswrapper[4740]: I1014 13:07:08.956110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-cni-binary-copy\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.956180 master-1 kubenswrapper[4740]: I1014 13:07:08.956137 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-cni-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.956180 master-1 kubenswrapper[4740]: I1014 13:07:08.956165 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-os-release\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.956485 master-1 kubenswrapper[4740]: I1014 13:07:08.956196 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-cni-bin\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl" Oct 14 13:07:08.956485 master-1 kubenswrapper[4740]: I1014 13:07:08.956271 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-kubelet\") pod \"multus-tq8hl\" (UID: 
\"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:08.956485 master-1 kubenswrapper[4740]: I1014 13:07:08.956313 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-hostroot\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:08.956485 master-1 kubenswrapper[4740]: I1014 13:07:08.956355 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-multus-certs\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:08.956485 master-1 kubenswrapper[4740]: I1014 13:07:08.956402 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-etc-kubernetes\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:08.956485 master-1 kubenswrapper[4740]: I1014 13:07:08.956446 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6qx\" (UniqueName: \"kubernetes.io/projected/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-kube-api-access-8d6qx\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.057662 master-1 kubenswrapper[4740]: I1014 13:07:09.057565 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-cni-bin\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.057844 master-1 kubenswrapper[4740]: I1014 13:07:09.057689 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-kubelet\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.057844 master-1 kubenswrapper[4740]: I1014 13:07:09.057728 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-cni-bin\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058049 master-1 kubenswrapper[4740]: I1014 13:07:09.057850 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-hostroot\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058049 master-1 kubenswrapper[4740]: I1014 13:07:09.057917 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-kubelet\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058223 master-1 kubenswrapper[4740]: I1014 13:07:09.058034 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-multus-certs\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058223 master-1 kubenswrapper[4740]: I1014 13:07:09.058093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-hostroot\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058223 master-1 kubenswrapper[4740]: I1014 13:07:09.058191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-multus-certs\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058546 master-1 kubenswrapper[4740]: I1014 13:07:09.058337 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-cni-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058686 master-1 kubenswrapper[4740]: I1014 13:07:09.058505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-cni-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058795 master-1 kubenswrapper[4740]: I1014 13:07:09.058707 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-os-release\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.058906 master-1 kubenswrapper[4740]: I1014 13:07:09.058857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-os-release\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.059000 master-1 kubenswrapper[4740]: I1014 13:07:09.058957 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-etc-kubernetes\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.059291 master-1 kubenswrapper[4740]: I1014 13:07:09.059076 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-etc-kubernetes\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.059291 master-1 kubenswrapper[4740]: I1014 13:07:09.059161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6qx\" (UniqueName: \"kubernetes.io/projected/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-kube-api-access-8d6qx\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.059558 master-1 kubenswrapper[4740]: I1014 13:07:09.059396 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-conf-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.059672 master-1 kubenswrapper[4740]: I1014 13:07:09.059583 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-daemon-config\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.059672 master-1 kubenswrapper[4740]: I1014 13:07:09.059510 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-conf-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.059931 master-1 kubenswrapper[4740]: I1014 13:07:09.059862 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-cni-multus\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060309 master-1 kubenswrapper[4740]: I1014 13:07:09.059635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-var-lib-cni-multus\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060420 master-1 kubenswrapper[4740]: I1014 13:07:09.060358 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-system-cni-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060420 master-1 kubenswrapper[4740]: I1014 13:07:09.060398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-netns\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060608 master-1 kubenswrapper[4740]: I1014 13:07:09.060429 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-cnibin\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060608 master-1 kubenswrapper[4740]: I1014 13:07:09.060459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-system-cni-dir\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060608 master-1 kubenswrapper[4740]: I1014 13:07:09.060461 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-cni-binary-copy\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060608 master-1 kubenswrapper[4740]: I1014 13:07:09.060526 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-socket-dir-parent\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.060608 master-1 kubenswrapper[4740]: I1014 13:07:09.060572 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-k8s-cni-cncf-io\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.061002 master-1 kubenswrapper[4740]: I1014 13:07:09.060650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-k8s-cni-cncf-io\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.061002 master-1 kubenswrapper[4740]: I1014 13:07:09.060654 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-host-run-netns\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.061002 master-1 kubenswrapper[4740]: I1014 13:07:09.060681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-cnibin\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.061002 master-1 kubenswrapper[4740]: I1014 13:07:09.060776 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-socket-dir-parent\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.061404 master-1 kubenswrapper[4740]: I1014 13:07:09.061099 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-multus-daemon-config\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.061624 master-1 kubenswrapper[4740]: I1014 13:07:09.061561 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-cni-binary-copy\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.073688 master-1 kubenswrapper[4740]: I1014 13:07:09.073615 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tn87t"]
Oct 14 13:07:09.074314 master-1 kubenswrapper[4740]: I1014 13:07:09.074218 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.078316 master-1 kubenswrapper[4740]: I1014 13:07:09.078214 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Oct 14 13:07:09.078316 master-1 kubenswrapper[4740]: I1014 13:07:09.078275 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Oct 14 13:07:09.081067 master-1 kubenswrapper[4740]: I1014 13:07:09.080997 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6qx\" (UniqueName: \"kubernetes.io/projected/ec26f385-2a7f-4c05-b1cd-86d00a4808e3-kube-api-access-8d6qx\") pod \"multus-tq8hl\" (UID: \"ec26f385-2a7f-4c05-b1cd-86d00a4808e3\") " pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.162069 master-1 kubenswrapper[4740]: I1014 13:07:09.161885 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.162069 master-1 kubenswrapper[4740]: I1014 13:07:09.161959 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cni-binary-copy\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.162069 master-1 kubenswrapper[4740]: I1014 13:07:09.162009 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cnibin\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.162492 master-1 kubenswrapper[4740]: I1014 13:07:09.162108 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.162492 master-1 kubenswrapper[4740]: I1014 13:07:09.162268 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-system-cni-dir\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.162492 master-1 kubenswrapper[4740]: I1014 13:07:09.162307 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-os-release\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.162492 master-1 kubenswrapper[4740]: I1014 13:07:09.162339 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqm8\" (UniqueName: \"kubernetes.io/projected/a52ab211-dfed-40b1-9d4f-e2b78edc6795-kube-api-access-dsqm8\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.162492 master-1 kubenswrapper[4740]: I1014 13:07:09.162376 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.186270 master-1 kubenswrapper[4740]: I1014 13:07:09.186171 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tq8hl"
Oct 14 13:07:09.201912 master-1 kubenswrapper[4740]: W1014 13:07:09.201835 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec26f385_2a7f_4c05_b1cd_86d00a4808e3.slice/crio-e5795d6d869eecdcb1e42b6e724942a6b99cc66bf7edb142292843fefd39daa9 WatchSource:0}: Error finding container e5795d6d869eecdcb1e42b6e724942a6b99cc66bf7edb142292843fefd39daa9: Status 404 returned error can't find the container with id e5795d6d869eecdcb1e42b6e724942a6b99cc66bf7edb142292843fefd39daa9
Oct 14 13:07:09.263641 master-1 kubenswrapper[4740]: I1014 13:07:09.263542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cnibin\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.263641 master-1 kubenswrapper[4740]: I1014 13:07:09.263630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.263868 master-1 kubenswrapper[4740]: I1014 13:07:09.263683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-system-cni-dir\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.263868 master-1 kubenswrapper[4740]: I1014 13:07:09.263730 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-os-release\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.263868 master-1 kubenswrapper[4740]: I1014 13:07:09.263744 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cnibin\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.263868 master-1 kubenswrapper[4740]: I1014 13:07:09.263785 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsqm8\" (UniqueName: \"kubernetes.io/projected/a52ab211-dfed-40b1-9d4f-e2b78edc6795-kube-api-access-dsqm8\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.263868 master-1 kubenswrapper[4740]: I1014 13:07:09.263836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.264175 master-1 kubenswrapper[4740]: I1014 13:07:09.263893 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
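The entries above interleave "operationExecutor.MountVolume started" and "MountVolume.SetUp succeeded" messages for many volumes, which makes it easy to miss a volume that started but never succeeded. A minimal sketch of pairing them up when triaging such a log (assumptions: the klog message wording and `\"`-escaped quoting exactly as they appear in this journal; `mount_status` is a hypothetical helper, not a kubelet API):

```python
import re

# Volume name appears as \"name\" inside the quoted klog message in this
# journal; the regexes tolerate the escaped or plain quote form.
STARTED = re.compile(r'operationExecutor\.MountVolume started for volume \\?"(?P<vol>[^"\\]+)')
SUCCEEDED = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"(?P<vol>[^"\\]+)')

def mount_status(lines):
    """Map each volume named in the journal lines to its last known state:
    'started' (mount attempted) or 'succeeded' (SetUp completed)."""
    status = {}
    for line in lines:
        m = STARTED.search(line)
        if m:
            status.setdefault(m.group("vol"), "started")
        m = SUCCEEDED.search(line)
        if m:
            status[m.group("vol")] = "succeeded"
    return status
```

Fed the lines above, any volume still marked `started` at the end of the excerpt is a candidate for a stuck mount (in this log, `metrics-certs` later in the excerpt is exactly that case).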
Oct 14 13:07:09.264175 master-1 kubenswrapper[4740]: I1014 13:07:09.263938 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cni-binary-copy\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.264175 master-1 kubenswrapper[4740]: I1014 13:07:09.263952 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-os-release\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.264175 master-1 kubenswrapper[4740]: I1014 13:07:09.263840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-system-cni-dir\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.264441 master-1 kubenswrapper[4740]: I1014 13:07:09.264366 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a52ab211-dfed-40b1-9d4f-e2b78edc6795-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.265125 master-1 kubenswrapper[4740]: I1014 13:07:09.265051 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-whereabouts-configmap\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.265516 master-1 kubenswrapper[4740]: I1014 13:07:09.265464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.265659 master-1 kubenswrapper[4740]: I1014 13:07:09.265604 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a52ab211-dfed-40b1-9d4f-e2b78edc6795-cni-binary-copy\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.293090 master-1 kubenswrapper[4740]: I1014 13:07:09.292988 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsqm8\" (UniqueName: \"kubernetes.io/projected/a52ab211-dfed-40b1-9d4f-e2b78edc6795-kube-api-access-dsqm8\") pod \"multus-additional-cni-plugins-tn87t\" (UID: \"a52ab211-dfed-40b1-9d4f-e2b78edc6795\") " pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.401011 master-1 kubenswrapper[4740]: I1014 13:07:09.400877 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tn87t"
Oct 14 13:07:09.416778 master-1 kubenswrapper[4740]: W1014 13:07:09.416641 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda52ab211_dfed_40b1_9d4f_e2b78edc6795.slice/crio-f75d13d30f8d2e4e4c628b59b9f907152fc6e0311cef4144c9489d719086e17b WatchSource:0}: Error finding container f75d13d30f8d2e4e4c628b59b9f907152fc6e0311cef4144c9489d719086e17b: Status 404 returned error can't find the container with id f75d13d30f8d2e4e4c628b59b9f907152fc6e0311cef4144c9489d719086e17b
Oct 14 13:07:09.853272 master-1 kubenswrapper[4740]: I1014 13:07:09.853192 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8l654"]
Oct 14 13:07:09.853627 master-1 kubenswrapper[4740]: I1014 13:07:09.853598 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:09.853821 master-1 kubenswrapper[4740]: E1014 13:07:09.853689 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:09.968543 master-1 kubenswrapper[4740]: I1014 13:07:09.968505 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:09.968543 master-1 kubenswrapper[4740]: I1014 13:07:09.968542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqcdz\" (UniqueName: \"kubernetes.io/projected/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-kube-api-access-sqcdz\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:10.069695 master-1 kubenswrapper[4740]: I1014 13:07:10.069610 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:10.069695 master-1 kubenswrapper[4740]: I1014 13:07:10.069691 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqcdz\" (UniqueName: \"kubernetes.io/projected/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-kube-api-access-sqcdz\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:10.069947 master-1 kubenswrapper[4740]: E1014 13:07:10.069880 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:10.070036 master-1 kubenswrapper[4740]: E1014 13:07:10.070009 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:10.569972583 +0000 UTC m=+56.380261942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:10.101163 master-1 kubenswrapper[4740]: I1014 13:07:10.101063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqcdz\" (UniqueName: \"kubernetes.io/projected/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-kube-api-access-sqcdz\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:10.102952 master-1 kubenswrapper[4740]: I1014 13:07:10.102854 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerStarted","Data":"f75d13d30f8d2e4e4c628b59b9f907152fc6e0311cef4144c9489d719086e17b"}
Oct 14 13:07:10.104276 master-1 kubenswrapper[4740]: I1014 13:07:10.104128 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tq8hl" event={"ID":"ec26f385-2a7f-4c05-b1cd-86d00a4808e3","Type":"ContainerStarted","Data":"e5795d6d869eecdcb1e42b6e724942a6b99cc66bf7edb142292843fefd39daa9"}
Oct 14 13:07:10.574683 master-1 kubenswrapper[4740]: I1014 13:07:10.574100 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:10.574683 master-1 kubenswrapper[4740]: E1014 13:07:10.574387 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:10.574683 master-1 kubenswrapper[4740]: E1014 13:07:10.574533 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:11.574498565 +0000 UTC m=+57.384787934 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:11.582417 master-1 kubenswrapper[4740]: I1014 13:07:11.582303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:11.582885 master-1 kubenswrapper[4740]: E1014 13:07:11.582491 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:11.582885 master-1 kubenswrapper[4740]: E1014 13:07:11.582587 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:13.582567355 +0000 UTC m=+59.392856674 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:11.943070 master-1 kubenswrapper[4740]: I1014 13:07:11.943017 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:11.943244 master-1 kubenswrapper[4740]: E1014 13:07:11.943195 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:12.109047 master-1 kubenswrapper[4740]: I1014 13:07:12.108933 4740 generic.go:334] "Generic (PLEG): container finished" podID="a52ab211-dfed-40b1-9d4f-e2b78edc6795" containerID="7d9464379053d3a584e93871deaaa848678adf44bcb2e4d113eda82258891e75" exitCode=0
Oct 14 13:07:12.109047 master-1 kubenswrapper[4740]: I1014 13:07:12.108970 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerDied","Data":"7d9464379053d3a584e93871deaaa848678adf44bcb2e4d113eda82258891e75"}
Oct 14 13:07:13.597815 master-1 kubenswrapper[4740]: I1014 13:07:13.597729 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:13.598553 master-1 kubenswrapper[4740]: E1014 13:07:13.597948 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:13.598553 master-1 kubenswrapper[4740]: E1014 13:07:13.598037 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:17.598013589 +0000 UTC m=+63.408302928 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:13.943542 master-1 kubenswrapper[4740]: I1014 13:07:13.943353 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:13.943542 master-1 kubenswrapper[4740]: E1014 13:07:13.943501 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:15.943119 master-1 kubenswrapper[4740]: I1014 13:07:15.943072 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:15.943758 master-1 kubenswrapper[4740]: E1014 13:07:15.943257 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:17.628814 master-1 kubenswrapper[4740]: I1014 13:07:17.628741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:17.629344 master-1 kubenswrapper[4740]: E1014 13:07:17.628870 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:17.629344 master-1 kubenswrapper[4740]: E1014 13:07:17.628918 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:25.628903456 +0000 UTC m=+71.439192785 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:17.943722 master-1 kubenswrapper[4740]: I1014 13:07:17.943567 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:17.943722 master-1 kubenswrapper[4740]: E1014 13:07:17.943697 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:19.943141 master-1 kubenswrapper[4740]: I1014 13:07:19.943085 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:19.943717 master-1 kubenswrapper[4740]: E1014 13:07:19.943211 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1" Oct 14 13:07:21.128086 master-1 kubenswrapper[4740]: I1014 13:07:21.127959 4740 generic.go:334] "Generic (PLEG): container finished" podID="a52ab211-dfed-40b1-9d4f-e2b78edc6795" containerID="2a9d5ef3ef9b405ea6fea772d5185da442972200d20fbed14668a2ffd3eefdad" exitCode=0 Oct 14 13:07:21.128086 master-1 kubenswrapper[4740]: I1014 13:07:21.128082 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerDied","Data":"2a9d5ef3ef9b405ea6fea772d5185da442972200d20fbed14668a2ffd3eefdad"} Oct 14 13:07:21.129698 master-1 kubenswrapper[4740]: I1014 13:07:21.129657 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tq8hl" event={"ID":"ec26f385-2a7f-4c05-b1cd-86d00a4808e3","Type":"ContainerStarted","Data":"642eeba4cf2a0c96a7d515d6a7ca37eefcb8b1c877bfce873ba5cf9bde2bebfe"} Oct 14 13:07:21.172903 master-1 kubenswrapper[4740]: I1014 13:07:21.172665 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tq8hl" podStartSLOduration=1.758813309 podStartE2EDuration="13.172589055s" podCreationTimestamp="2025-10-14 13:07:08 +0000 UTC" firstStartedPulling="2025-10-14 13:07:09.204091446 +0000 UTC m=+55.014380805" lastFinishedPulling="2025-10-14 13:07:20.617867202 +0000 UTC m=+66.428156551" observedRunningTime="2025-10-14 13:07:21.172575864 +0000 UTC m=+66.982865193" watchObservedRunningTime="2025-10-14 13:07:21.172589055 +0000 UTC m=+66.982878424" Oct 14 13:07:21.273489 master-1 kubenswrapper[4740]: I1014 13:07:21.273376 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj"] Oct 14 13:07:21.273820 master-1 kubenswrapper[4740]: I1014 13:07:21.273774 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.276882 master-1 kubenswrapper[4740]: I1014 13:07:21.276827 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 14 13:07:21.277335 master-1 kubenswrapper[4740]: I1014 13:07:21.277057 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 14 13:07:21.277335 master-1 kubenswrapper[4740]: I1014 13:07:21.277184 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 14 13:07:21.277335 master-1 kubenswrapper[4740]: I1014 13:07:21.277274 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 14 13:07:21.277335 master-1 kubenswrapper[4740]: I1014 13:07:21.277273 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 14 13:07:21.454679 master-1 kubenswrapper[4740]: I1014 13:07:21.454270 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.454679 master-1 kubenswrapper[4740]: I1014 13:07:21.454348 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.454679 master-1 kubenswrapper[4740]: I1014 13:07:21.454501 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-env-overrides\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.454679 master-1 kubenswrapper[4740]: I1014 13:07:21.454547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpvpc\" (UniqueName: \"kubernetes.io/projected/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-kube-api-access-dpvpc\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.478121 master-1 kubenswrapper[4740]: I1014 13:07:21.477982 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g2f76"] Oct 14 13:07:21.478853 master-1 kubenswrapper[4740]: I1014 13:07:21.478820 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.482209 master-1 kubenswrapper[4740]: I1014 13:07:21.482147 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 14 13:07:21.482528 master-1 kubenswrapper[4740]: I1014 13:07:21.482471 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 14 13:07:21.555125 master-1 kubenswrapper[4740]: I1014 13:07:21.555019 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-env-overrides\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.555125 master-1 kubenswrapper[4740]: I1014 13:07:21.555087 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpvpc\" (UniqueName: \"kubernetes.io/projected/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-kube-api-access-dpvpc\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.555553 master-1 kubenswrapper[4740]: I1014 13:07:21.555151 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.555553 master-1 kubenswrapper[4740]: I1014 13:07:21.555186 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.556043 master-1 kubenswrapper[4740]: I1014 13:07:21.555976 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-env-overrides\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.556563 master-1 kubenswrapper[4740]: I1014 13:07:21.556499 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-ovnkube-config\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.561615 master-1 kubenswrapper[4740]: I1014 13:07:21.561554 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.579597 master-1 kubenswrapper[4740]: I1014 13:07:21.579503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpvpc\" (UniqueName: \"kubernetes.io/projected/1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd-kube-api-access-dpvpc\") pod \"ovnkube-control-plane-864d695c77-zrhxj\" (UID: \"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.592184 master-1 kubenswrapper[4740]: I1014 13:07:21.592082 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" Oct 14 13:07:21.656144 master-1 kubenswrapper[4740]: I1014 13:07:21.655819 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-var-lib-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656423 master-1 kubenswrapper[4740]: I1014 13:07:21.656255 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-netd\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656423 master-1 kubenswrapper[4740]: I1014 13:07:21.656353 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-log-socket\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656423 master-1 kubenswrapper[4740]: I1014 13:07:21.656398 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-script-lib\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656622 master-1 
kubenswrapper[4740]: I1014 13:07:21.656431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-netns\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656622 master-1 kubenswrapper[4740]: I1014 13:07:21.656465 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-node-log\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656622 master-1 kubenswrapper[4740]: I1014 13:07:21.656500 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-etc-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656622 master-1 kubenswrapper[4740]: I1014 13:07:21.656529 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-systemd-units\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656622 master-1 kubenswrapper[4740]: I1014 13:07:21.656557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-slash\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656917 master-1 kubenswrapper[4740]: I1014 13:07:21.656587 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-kubelet\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656917 master-1 kubenswrapper[4740]: I1014 13:07:21.656700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7fkv\" (UniqueName: \"kubernetes.io/projected/9b565ca7-6b58-4c77-9be7-495cc929fbad-kube-api-access-s7fkv\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656917 master-1 kubenswrapper[4740]: I1014 13:07:21.656734 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-ovn-kubernetes\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656917 master-1 kubenswrapper[4740]: I1014 13:07:21.656801 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656917 master-1 kubenswrapper[4740]: I1014 13:07:21.656830 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-config\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.656917 master-1 kubenswrapper[4740]: I1014 13:07:21.656862 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-ovn\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.657310 master-1 kubenswrapper[4740]: I1014 13:07:21.656974 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-systemd\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.657310 master-1 kubenswrapper[4740]: I1014 13:07:21.657056 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-env-overrides\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.657310 master-1 kubenswrapper[4740]: I1014 13:07:21.657157 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-bin\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.657310 master-1 kubenswrapper[4740]: I1014 13:07:21.657220 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.657310 master-1 kubenswrapper[4740]: I1014 13:07:21.657309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovn-node-metrics-cert\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.757908 master-1 kubenswrapper[4740]: I1014 13:07:21.757823 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-node-log\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.757908 master-1 kubenswrapper[4740]: I1014 13:07:21.757895 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-etc-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.757929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-slash\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.757963 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-kubelet\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.757996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-systemd-units\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.758028 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7fkv\" (UniqueName: \"kubernetes.io/projected/9b565ca7-6b58-4c77-9be7-495cc929fbad-kube-api-access-s7fkv\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.758077 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-ovn-kubernetes\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.758108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 
13:07:21.758161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-config\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.758190 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-ovn\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758269 master-1 kubenswrapper[4740]: I1014 13:07:21.758221 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-systemd\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758290 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-env-overrides\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758338 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-bin\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758372 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758401 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovn-node-metrics-cert\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758429 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-var-lib-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758460 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-netd\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-log-socket\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:21.758777 master-1 
kubenswrapper[4740]: I1014 13:07:21.758543 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-script-lib\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758577 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-netns\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758702 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-netns\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.758777 master-1 kubenswrapper[4740]: I1014 13:07:21.758774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-node-log\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.759384 master-1 kubenswrapper[4740]: I1014 13:07:21.758835 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-etc-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.759384 master-1 kubenswrapper[4740]: I1014 13:07:21.758923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-slash\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.759384 master-1 kubenswrapper[4740]: I1014 13:07:21.758995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-kubelet\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.759384 master-1 kubenswrapper[4740]: I1014 13:07:21.759062 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-systemd-units\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.760764 master-1 kubenswrapper[4740]: I1014 13:07:21.759603 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-ovn-kubernetes\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.760764 master-1 kubenswrapper[4740]: I1014 13:07:21.760021 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-bin\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.760764 master-1 kubenswrapper[4740]: I1014 13:07:21.760144 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-ovn\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.760764 master-1 kubenswrapper[4740]: I1014 13:07:21.760148 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-var-lib-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.760764 master-1 kubenswrapper[4740]: I1014 13:07:21.760183 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-openvswitch\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.760764 master-1 kubenswrapper[4740]: I1014 13:07:21.760218 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.761416 master-1 kubenswrapper[4740]: I1014 13:07:21.761067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-config\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.761416 master-1 kubenswrapper[4740]: I1014 13:07:21.761340 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-systemd\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.761581 master-1 kubenswrapper[4740]: I1014 13:07:21.761206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-netd\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.763200 master-1 kubenswrapper[4740]: I1014 13:07:21.762426 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-env-overrides\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.763200 master-1 kubenswrapper[4740]: I1014 13:07:21.763138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-script-lib\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.763859 master-1 kubenswrapper[4740]: I1014 13:07:21.763631 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-log-socket\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.766110 master-1 kubenswrapper[4740]: I1014 13:07:21.766057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovn-node-metrics-cert\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.787292 master-1 kubenswrapper[4740]: I1014 13:07:21.787163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7fkv\" (UniqueName: \"kubernetes.io/projected/9b565ca7-6b58-4c77-9be7-495cc929fbad-kube-api-access-s7fkv\") pod \"ovnkube-node-g2f76\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") " pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.796520 master-1 kubenswrapper[4740]: I1014 13:07:21.796448 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:21.815414 master-1 kubenswrapper[4740]: W1014 13:07:21.815337 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b565ca7_6b58_4c77_9be7_495cc929fbad.slice/crio-0bef98c8075400ff6c25edc3bb3e77e22c3a5efdc43c9bd5abf9c2e2b3b8fd29 WatchSource:0}: Error finding container 0bef98c8075400ff6c25edc3bb3e77e22c3a5efdc43c9bd5abf9c2e2b3b8fd29: Status 404 returned error can't find the container with id 0bef98c8075400ff6c25edc3bb3e77e22c3a5efdc43c9bd5abf9c2e2b3b8fd29
Oct 14 13:07:21.943391 master-1 kubenswrapper[4740]: I1014 13:07:21.943214 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:21.943817 master-1 kubenswrapper[4740]: E1014 13:07:21.943407 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:22.134078 master-1 kubenswrapper[4740]: I1014 13:07:22.133823 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"0bef98c8075400ff6c25edc3bb3e77e22c3a5efdc43c9bd5abf9c2e2b3b8fd29"}
Oct 14 13:07:22.136414 master-1 kubenswrapper[4740]: I1014 13:07:22.136332 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" event={"ID":"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd","Type":"ContainerStarted","Data":"701ecf2a2b49e2a931d8a8e5769a2d32821587f1a52e27a8095c84b118a94099"}
Oct 14 13:07:22.136519 master-1 kubenswrapper[4740]: I1014 13:07:22.136420 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" event={"ID":"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd","Type":"ContainerStarted","Data":"eba692bb9e7d2397f10269f949cbd357a3e79704cbeeac1abb7f955f23350071"}
Oct 14 13:07:23.139724 master-1 kubenswrapper[4740]: I1014 13:07:23.139682 4740 generic.go:334] "Generic (PLEG): container finished" podID="a52ab211-dfed-40b1-9d4f-e2b78edc6795" containerID="fd7f164f6b2c835e98c3864b318ecf2a741ffdde4cd9ea2152cc50a54d43fb20" exitCode=0
Oct 14 13:07:23.139724 master-1 kubenswrapper[4740]: I1014 13:07:23.139719 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerDied","Data":"fd7f164f6b2c835e98c3864b318ecf2a741ffdde4cd9ea2152cc50a54d43fb20"}
Oct 14 13:07:23.942763 master-1 kubenswrapper[4740]: I1014 13:07:23.942722 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:23.942938 master-1 kubenswrapper[4740]: E1014 13:07:23.942849 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:24.464072 master-1 kubenswrapper[4740]: I1014 13:07:24.463966 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-sndvg"]
Oct 14 13:07:24.465273 master-1 kubenswrapper[4740]: I1014 13:07:24.464313 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:24.465273 master-1 kubenswrapper[4740]: E1014 13:07:24.464378 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:24.584192 master-1 kubenswrapper[4740]: I1014 13:07:24.584124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:24.684846 master-1 kubenswrapper[4740]: I1014 13:07:24.684778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:24.701278 master-1 kubenswrapper[4740]: E1014 13:07:24.701106 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:07:24.701278 master-1 kubenswrapper[4740]: E1014 13:07:24.701144 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:07:24.701278 master-1 kubenswrapper[4740]: E1014 13:07:24.701158 4740 projected.go:194] Error preparing data for projected volume kube-api-access-mbd6g for pod openshift-network-diagnostics/network-check-target-sndvg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:24.701278 master-1 kubenswrapper[4740]: E1014 13:07:24.701239 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g podName:a745a9ed-4507-491b-b50f-7a5e3837b928 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:25.201204394 +0000 UTC m=+71.011493733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mbd6g" (UniqueName: "kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g") pod "network-check-target-sndvg" (UID: "a745a9ed-4507-491b-b50f-7a5e3837b928") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:25.288129 master-1 kubenswrapper[4740]: I1014 13:07:25.288023 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:25.288407 master-1 kubenswrapper[4740]: E1014 13:07:25.288366 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:07:25.288558 master-1 kubenswrapper[4740]: E1014 13:07:25.288418 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:07:25.288558 master-1 kubenswrapper[4740]: E1014 13:07:25.288446 4740 projected.go:194] Error preparing data for projected volume kube-api-access-mbd6g for pod openshift-network-diagnostics/network-check-target-sndvg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:25.288558 master-1 kubenswrapper[4740]: E1014 13:07:25.288540 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g podName:a745a9ed-4507-491b-b50f-7a5e3837b928 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:26.288510144 +0000 UTC m=+72.098799513 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-mbd6g" (UniqueName: "kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g") pod "network-check-target-sndvg" (UID: "a745a9ed-4507-491b-b50f-7a5e3837b928") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:25.690385 master-1 kubenswrapper[4740]: I1014 13:07:25.690262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:25.690895 master-1 kubenswrapper[4740]: E1014 13:07:25.690405 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:25.690895 master-1 kubenswrapper[4740]: E1014 13:07:25.690468 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:41.690451973 +0000 UTC m=+87.500741312 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:07:25.944012 master-1 kubenswrapper[4740]: I1014 13:07:25.943583 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:25.944012 master-1 kubenswrapper[4740]: I1014 13:07:25.943634 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:25.944012 master-1 kubenswrapper[4740]: E1014 13:07:25.943698 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:25.944012 master-1 kubenswrapper[4740]: E1014 13:07:25.943782 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:26.293871 master-1 kubenswrapper[4740]: I1014 13:07:26.293815 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:26.294095 master-1 kubenswrapper[4740]: E1014 13:07:26.294066 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:07:26.294250 master-1 kubenswrapper[4740]: E1014 13:07:26.294203 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:07:26.294317 master-1 kubenswrapper[4740]: E1014 13:07:26.294295 4740 projected.go:194] Error preparing data for projected volume kube-api-access-mbd6g for pod openshift-network-diagnostics/network-check-target-sndvg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:26.294403 master-1 kubenswrapper[4740]: E1014 13:07:26.294375 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g podName:a745a9ed-4507-491b-b50f-7a5e3837b928 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:28.294352279 +0000 UTC m=+74.104641648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mbd6g" (UniqueName: "kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g") pod "network-check-target-sndvg" (UID: "a745a9ed-4507-491b-b50f-7a5e3837b928") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:27.058881 master-1 kubenswrapper[4740]: I1014 13:07:27.058359 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-rsr2v"]
Oct 14 13:07:27.058881 master-1 kubenswrapper[4740]: I1014 13:07:27.058637 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.061974 master-1 kubenswrapper[4740]: I1014 13:07:27.061766 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Oct 14 13:07:27.061974 master-1 kubenswrapper[4740]: I1014 13:07:27.061797 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Oct 14 13:07:27.061974 master-1 kubenswrapper[4740]: I1014 13:07:27.061817 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Oct 14 13:07:27.061974 master-1 kubenswrapper[4740]: I1014 13:07:27.061823 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Oct 14 13:07:27.063960 master-1 kubenswrapper[4740]: I1014 13:07:27.063924 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Oct 14 13:07:27.152149 master-1 kubenswrapper[4740]: I1014 13:07:27.152090 4740 generic.go:334] "Generic (PLEG): container finished" podID="a52ab211-dfed-40b1-9d4f-e2b78edc6795" containerID="0e990c73474f4e2ab3b0bde20801a251faffe40a367702c2fe9d443e35dfe7df" exitCode=0
Oct 14 13:07:27.152149 master-1 kubenswrapper[4740]: I1014 13:07:27.152151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerDied","Data":"0e990c73474f4e2ab3b0bde20801a251faffe40a367702c2fe9d443e35dfe7df"}
Oct 14 13:07:27.201050 master-1 kubenswrapper[4740]: I1014 13:07:27.201013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1a39f44d-8daa-4693-858f-6c0d3c8caa23-ovnkube-identity-cm\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.201050 master-1 kubenswrapper[4740]: I1014 13:07:27.201051 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hx5\" (UniqueName: \"kubernetes.io/projected/1a39f44d-8daa-4693-858f-6c0d3c8caa23-kube-api-access-54hx5\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.201151 master-1 kubenswrapper[4740]: I1014 13:07:27.201070 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a39f44d-8daa-4693-858f-6c0d3c8caa23-webhook-cert\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.201151 master-1 kubenswrapper[4740]: I1014 13:07:27.201110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a39f44d-8daa-4693-858f-6c0d3c8caa23-env-overrides\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.302429 master-1 kubenswrapper[4740]: I1014 13:07:27.302386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hx5\" (UniqueName: \"kubernetes.io/projected/1a39f44d-8daa-4693-858f-6c0d3c8caa23-kube-api-access-54hx5\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.302549 master-1 kubenswrapper[4740]: I1014 13:07:27.302447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a39f44d-8daa-4693-858f-6c0d3c8caa23-webhook-cert\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.302602 master-1 kubenswrapper[4740]: I1014 13:07:27.302581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a39f44d-8daa-4693-858f-6c0d3c8caa23-env-overrides\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.302639 master-1 kubenswrapper[4740]: I1014 13:07:27.302615 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1a39f44d-8daa-4693-858f-6c0d3c8caa23-ovnkube-identity-cm\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.303857 master-1 kubenswrapper[4740]: I1014 13:07:27.303812 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a39f44d-8daa-4693-858f-6c0d3c8caa23-env-overrides\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.304099 master-1 kubenswrapper[4740]: I1014 13:07:27.304072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/1a39f44d-8daa-4693-858f-6c0d3c8caa23-ovnkube-identity-cm\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.307876 master-1 kubenswrapper[4740]: I1014 13:07:27.307829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a39f44d-8daa-4693-858f-6c0d3c8caa23-webhook-cert\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.327612 master-1 kubenswrapper[4740]: I1014 13:07:27.327545 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hx5\" (UniqueName: \"kubernetes.io/projected/1a39f44d-8daa-4693-858f-6c0d3c8caa23-kube-api-access-54hx5\") pod \"network-node-identity-rsr2v\" (UID: \"1a39f44d-8daa-4693-858f-6c0d3c8caa23\") " pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.370273 master-1 kubenswrapper[4740]: I1014 13:07:27.370220 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rsr2v"
Oct 14 13:07:27.383980 master-1 kubenswrapper[4740]: W1014 13:07:27.383948 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a39f44d_8daa_4693_858f_6c0d3c8caa23.slice/crio-7cf8dcbcb9076a026f67bd85bdff7359cc03bf23de1bbd8f7118f0861dff0f89 WatchSource:0}: Error finding container 7cf8dcbcb9076a026f67bd85bdff7359cc03bf23de1bbd8f7118f0861dff0f89: Status 404 returned error can't find the container with id 7cf8dcbcb9076a026f67bd85bdff7359cc03bf23de1bbd8f7118f0861dff0f89
Oct 14 13:07:27.943538 master-1 kubenswrapper[4740]: I1014 13:07:27.943484 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:27.943748 master-1 kubenswrapper[4740]: I1014 13:07:27.943496 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:27.943748 master-1 kubenswrapper[4740]: E1014 13:07:27.943616 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:27.943748 master-1 kubenswrapper[4740]: E1014 13:07:27.943721 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:28.155120 master-1 kubenswrapper[4740]: I1014 13:07:28.155044 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rsr2v" event={"ID":"1a39f44d-8daa-4693-858f-6c0d3c8caa23","Type":"ContainerStarted","Data":"7cf8dcbcb9076a026f67bd85bdff7359cc03bf23de1bbd8f7118f0861dff0f89"}
Oct 14 13:07:28.311748 master-1 kubenswrapper[4740]: I1014 13:07:28.311698 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:28.311952 master-1 kubenswrapper[4740]: E1014 13:07:28.311886 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:07:28.311952 master-1 kubenswrapper[4740]: E1014 13:07:28.311913 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:07:28.311952 master-1 kubenswrapper[4740]: E1014 13:07:28.311927 4740 projected.go:194] Error preparing data for projected volume kube-api-access-mbd6g for pod openshift-network-diagnostics/network-check-target-sndvg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:28.312038 master-1 kubenswrapper[4740]: E1014 13:07:28.311985 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g podName:a745a9ed-4507-491b-b50f-7a5e3837b928 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:32.311969505 +0000 UTC m=+78.122258844 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-mbd6g" (UniqueName: "kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g") pod "network-check-target-sndvg" (UID: "a745a9ed-4507-491b-b50f-7a5e3837b928") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:29.943205 master-1 kubenswrapper[4740]: I1014 13:07:29.942717 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:29.943205 master-1 kubenswrapper[4740]: I1014 13:07:29.942719 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:29.943205 master-1 kubenswrapper[4740]: E1014 13:07:29.942848 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:29.943205 master-1 kubenswrapper[4740]: E1014 13:07:29.942965 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:31.943181 master-1 kubenswrapper[4740]: I1014 13:07:31.942815 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:31.943181 master-1 kubenswrapper[4740]: I1014 13:07:31.942896 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:31.943181 master-1 kubenswrapper[4740]: E1014 13:07:31.942928 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:31.943181 master-1 kubenswrapper[4740]: E1014 13:07:31.943115 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:32.344121 master-1 kubenswrapper[4740]: I1014 13:07:32.344061 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:32.344397 master-1 kubenswrapper[4740]: E1014 13:07:32.344204 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:07:32.344397 master-1 kubenswrapper[4740]: E1014 13:07:32.344222 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:07:32.344397 master-1 kubenswrapper[4740]: E1014 13:07:32.344261 4740 projected.go:194] Error preparing data for projected volume kube-api-access-mbd6g for pod openshift-network-diagnostics/network-check-target-sndvg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:32.344397 master-1 kubenswrapper[4740]: E1014 13:07:32.344310 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g podName:a745a9ed-4507-491b-b50f-7a5e3837b928 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:40.344297148 +0000 UTC m=+86.154586477 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-mbd6g" (UniqueName: "kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g") pod "network-check-target-sndvg" (UID: "a745a9ed-4507-491b-b50f-7a5e3837b928") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:33.943791 master-1 kubenswrapper[4740]: I1014 13:07:33.943728 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:33.944398 master-1 kubenswrapper[4740]: I1014 13:07:33.943831 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:33.944398 master-1 kubenswrapper[4740]: E1014 13:07:33.943904 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:33.944398 master-1 kubenswrapper[4740]: E1014 13:07:33.943976 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:35.943447 master-1 kubenswrapper[4740]: I1014 13:07:35.943407 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:07:35.944220 master-1 kubenswrapper[4740]: I1014 13:07:35.943485 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:07:35.944220 master-1 kubenswrapper[4740]: E1014 13:07:35.943508 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928" Oct 14 13:07:35.944220 master-1 kubenswrapper[4740]: E1014 13:07:35.943652 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1" Oct 14 13:07:37.942935 master-1 kubenswrapper[4740]: I1014 13:07:37.942879 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:07:37.943571 master-1 kubenswrapper[4740]: E1014 13:07:37.943031 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928" Oct 14 13:07:37.943571 master-1 kubenswrapper[4740]: I1014 13:07:37.943493 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:07:37.943710 master-1 kubenswrapper[4740]: E1014 13:07:37.943600 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1" Oct 14 13:07:38.181181 master-1 kubenswrapper[4740]: I1014 13:07:38.180996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rsr2v" event={"ID":"1a39f44d-8daa-4693-858f-6c0d3c8caa23","Type":"ContainerStarted","Data":"7aaf8d184cd4c83137ec24973e9adabc69a8422292040fea92ba409f606f8cfe"} Oct 14 13:07:38.181181 master-1 kubenswrapper[4740]: I1014 13:07:38.181084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rsr2v" event={"ID":"1a39f44d-8daa-4693-858f-6c0d3c8caa23","Type":"ContainerStarted","Data":"b86772826ca82f85c24e7342e614b4937987851f3beab9f5844cb5bb8adb184b"} Oct 14 13:07:38.185997 master-1 kubenswrapper[4740]: I1014 13:07:38.185923 4740 generic.go:334] "Generic (PLEG): container finished" podID="a52ab211-dfed-40b1-9d4f-e2b78edc6795" containerID="cbd447c48eeb8d4485025d0778098385fc4090bc4bb95a9de6523eb2b7076a00" exitCode=0 Oct 14 13:07:38.186343 master-1 kubenswrapper[4740]: I1014 13:07:38.186270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" 
event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerDied","Data":"cbd447c48eeb8d4485025d0778098385fc4090bc4bb95a9de6523eb2b7076a00"} Oct 14 13:07:38.188579 master-1 kubenswrapper[4740]: I1014 13:07:38.188418 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d" exitCode=0 Oct 14 13:07:38.188579 master-1 kubenswrapper[4740]: I1014 13:07:38.188514 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} Oct 14 13:07:38.191935 master-1 kubenswrapper[4740]: I1014 13:07:38.191865 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" event={"ID":"1f4848ce-ac6d-4d7c-8a6d-5038d4d975dd","Type":"ContainerStarted","Data":"6ff808c1c104400dc06b19d5166313b81c903c0cc0cf4f4f743d1f76d1b4025c"} Oct 14 13:07:38.197563 master-1 kubenswrapper[4740]: I1014 13:07:38.197481 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-rsr2v" podStartSLOduration=0.910478252 podStartE2EDuration="11.197463106s" podCreationTimestamp="2025-10-14 13:07:27 +0000 UTC" firstStartedPulling="2025-10-14 13:07:27.387072624 +0000 UTC m=+73.197361953" lastFinishedPulling="2025-10-14 13:07:37.674057478 +0000 UTC m=+83.484346807" observedRunningTime="2025-10-14 13:07:38.196586924 +0000 UTC m=+84.006876283" watchObservedRunningTime="2025-10-14 13:07:38.197463106 +0000 UTC m=+84.007752465" Oct 14 13:07:38.234112 master-1 kubenswrapper[4740]: I1014 13:07:38.233991 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj" podStartSLOduration=1.447605333 
podStartE2EDuration="17.233960726s" podCreationTimestamp="2025-10-14 13:07:21 +0000 UTC" firstStartedPulling="2025-10-14 13:07:21.871216772 +0000 UTC m=+67.681506141" lastFinishedPulling="2025-10-14 13:07:37.657572195 +0000 UTC m=+83.467861534" observedRunningTime="2025-10-14 13:07:38.233897254 +0000 UTC m=+84.044186613" watchObservedRunningTime="2025-10-14 13:07:38.233960726 +0000 UTC m=+84.044250085" Oct 14 13:07:39.202109 master-1 kubenswrapper[4740]: I1014 13:07:39.201573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} Oct 14 13:07:39.202109 master-1 kubenswrapper[4740]: I1014 13:07:39.201996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} Oct 14 13:07:39.202109 master-1 kubenswrapper[4740]: I1014 13:07:39.202018 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} Oct 14 13:07:39.202109 master-1 kubenswrapper[4740]: I1014 13:07:39.202036 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} Oct 14 13:07:39.202109 master-1 kubenswrapper[4740]: I1014 13:07:39.202057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" 
event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} Oct 14 13:07:39.202109 master-1 kubenswrapper[4740]: I1014 13:07:39.202074 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} Oct 14 13:07:39.207947 master-1 kubenswrapper[4740]: I1014 13:07:39.207893 4740 generic.go:334] "Generic (PLEG): container finished" podID="a52ab211-dfed-40b1-9d4f-e2b78edc6795" containerID="246905524b2c4c275308ee7fb20c722a550d2b292a3c0ee99314022444328990" exitCode=0 Oct 14 13:07:39.208028 master-1 kubenswrapper[4740]: I1014 13:07:39.207966 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerDied","Data":"246905524b2c4c275308ee7fb20c722a550d2b292a3c0ee99314022444328990"} Oct 14 13:07:39.943949 master-1 kubenswrapper[4740]: I1014 13:07:39.943482 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:07:39.944307 master-1 kubenswrapper[4740]: I1014 13:07:39.943574 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:07:39.944307 master-1 kubenswrapper[4740]: E1014 13:07:39.944047 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928" Oct 14 13:07:39.944307 master-1 kubenswrapper[4740]: E1014 13:07:39.944119 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1" Oct 14 13:07:40.218292 master-1 kubenswrapper[4740]: I1014 13:07:40.218115 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tn87t" event={"ID":"a52ab211-dfed-40b1-9d4f-e2b78edc6795","Type":"ContainerStarted","Data":"e3324971f6a647bf3e3739df58c0104f5a883afd9041df56f87cbec298581515"} Oct 14 13:07:40.243155 master-1 kubenswrapper[4740]: I1014 13:07:40.243010 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tn87t" podStartSLOduration=3.067705213 podStartE2EDuration="31.242987063s" podCreationTimestamp="2025-10-14 13:07:09 +0000 UTC" firstStartedPulling="2025-10-14 13:07:09.419644988 +0000 UTC m=+55.229934357" lastFinishedPulling="2025-10-14 13:07:37.594926868 +0000 UTC m=+83.405216207" observedRunningTime="2025-10-14 13:07:40.242043079 +0000 UTC m=+86.052332448" watchObservedRunningTime="2025-10-14 13:07:40.242987063 +0000 UTC m=+86.053276432" Oct 14 13:07:40.413050 master-1 kubenswrapper[4740]: I1014 13:07:40.412897 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg" 
Oct 14 13:07:40.413397 master-1 kubenswrapper[4740]: E1014 13:07:40.413166 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:07:40.413397 master-1 kubenswrapper[4740]: E1014 13:07:40.413216 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:07:40.413397 master-1 kubenswrapper[4740]: E1014 13:07:40.413261 4740 projected.go:194] Error preparing data for projected volume kube-api-access-mbd6g for pod openshift-network-diagnostics/network-check-target-sndvg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:07:40.413397 master-1 kubenswrapper[4740]: E1014 13:07:40.413327 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g podName:a745a9ed-4507-491b-b50f-7a5e3837b928 nodeName:}" failed. No retries permitted until 2025-10-14 13:07:56.413307578 +0000 UTC m=+102.223596917 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mbd6g" (UniqueName: "kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g") pod "network-check-target-sndvg" (UID: "a745a9ed-4507-491b-b50f-7a5e3837b928") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:07:41.229328 master-1 kubenswrapper[4740]: I1014 13:07:41.229164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} Oct 14 13:07:41.723643 master-1 kubenswrapper[4740]: I1014 13:07:41.723538 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:07:41.723847 master-1 kubenswrapper[4740]: E1014 13:07:41.723721 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:07:41.723847 master-1 kubenswrapper[4740]: E1014 13:07:41.723819 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:13.723793084 +0000 UTC m=+119.534082443 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:07:41.943426 master-1 kubenswrapper[4740]: I1014 13:07:41.943340 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:07:41.944048 master-1 kubenswrapper[4740]: I1014 13:07:41.943550 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:07:41.944180 master-1 kubenswrapper[4740]: E1014 13:07:41.944138 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928" Oct 14 13:07:41.944448 master-1 kubenswrapper[4740]: E1014 13:07:41.944353 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1" Oct 14 13:07:43.943326 master-1 kubenswrapper[4740]: I1014 13:07:43.943194 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:07:43.944220 master-1 kubenswrapper[4740]: E1014 13:07:43.943399 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928" Oct 14 13:07:43.944220 master-1 kubenswrapper[4740]: I1014 13:07:43.943610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:07:43.944220 master-1 kubenswrapper[4740]: E1014 13:07:43.943848 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1" Oct 14 13:07:44.245436 master-1 kubenswrapper[4740]: I1014 13:07:44.245324 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerStarted","Data":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} Oct 14 13:07:44.245871 master-1 kubenswrapper[4740]: I1014 13:07:44.245804 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:44.245953 master-1 kubenswrapper[4740]: I1014 13:07:44.245883 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:44.284871 master-1 kubenswrapper[4740]: I1014 13:07:44.284704 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podStartSLOduration=7.4422791870000005 podStartE2EDuration="23.284673533s" podCreationTimestamp="2025-10-14 13:07:21 +0000 UTC" firstStartedPulling="2025-10-14 13:07:21.819751312 +0000 UTC m=+67.630040671" lastFinishedPulling="2025-10-14 13:07:37.662145688 +0000 UTC m=+83.472435017" observedRunningTime="2025-10-14 13:07:44.282980642 +0000 UTC m=+90.093270011" watchObservedRunningTime="2025-10-14 13:07:44.284673533 +0000 UTC m=+90.094962912" Oct 14 13:07:45.249540 master-1 kubenswrapper[4740]: I1014 13:07:45.249389 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:45.710996 master-1 kubenswrapper[4740]: I1014 13:07:45.709731 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-sndvg"] Oct 14 13:07:45.710996 master-1 kubenswrapper[4740]: I1014 13:07:45.709979 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:07:45.710996 master-1 kubenswrapper[4740]: E1014 13:07:45.710166 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928" Oct 14 13:07:45.710996 master-1 kubenswrapper[4740]: I1014 13:07:45.710838 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8l654"] Oct 14 13:07:45.711443 master-1 kubenswrapper[4740]: I1014 13:07:45.711028 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:07:45.711443 master-1 kubenswrapper[4740]: E1014 13:07:45.711212 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1" Oct 14 13:07:46.912997 master-1 kubenswrapper[4740]: I1014 13:07:46.912941 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g2f76"] Oct 14 13:07:46.943030 master-1 kubenswrapper[4740]: I1014 13:07:46.942935 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:07:46.943179 master-1 kubenswrapper[4740]: E1014 13:07:46.943125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928" Oct 14 13:07:47.256310 master-1 kubenswrapper[4740]: I1014 13:07:47.256207 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-controller" containerID="cri-o://bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68" gracePeriod=30 Oct 14 13:07:47.256310 master-1 kubenswrapper[4740]: I1014 13:07:47.256265 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab" gracePeriod=30 Oct 14 13:07:47.256745 master-1 kubenswrapper[4740]: I1014 13:07:47.256317 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="nbdb" containerID="cri-o://19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" gracePeriod=30 Oct 14 13:07:47.256745 master-1 kubenswrapper[4740]: I1014 13:07:47.256351 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="northd" 
containerID="cri-o://460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615" gracePeriod=30 Oct 14 13:07:47.256745 master-1 kubenswrapper[4740]: I1014 13:07:47.256209 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="sbdb" containerID="cri-o://c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" gracePeriod=30 Oct 14 13:07:47.256745 master-1 kubenswrapper[4740]: I1014 13:07:47.256465 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-acl-logging" containerID="cri-o://de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd" gracePeriod=30 Oct 14 13:07:47.256745 master-1 kubenswrapper[4740]: I1014 13:07:47.256442 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-node" containerID="cri-o://3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7" gracePeriod=30 Oct 14 13:07:47.286692 master-1 kubenswrapper[4740]: I1014 13:07:47.286571 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovnkube-controller" containerID="cri-o://afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" gracePeriod=30 Oct 14 13:07:47.834777 master-1 kubenswrapper[4740]: I1014 13:07:47.834316 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/ovnkube-controller/0.log" Oct 14 13:07:47.837960 master-1 kubenswrapper[4740]: I1014 13:07:47.837913 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/kube-rbac-proxy-ovn-metrics/0.log"
Oct 14 13:07:47.839010 master-1 kubenswrapper[4740]: I1014 13:07:47.838972 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/kube-rbac-proxy-node/0.log"
Oct 14 13:07:47.840166 master-1 kubenswrapper[4740]: I1014 13:07:47.840094 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/ovn-acl-logging/0.log"
Oct 14 13:07:47.841382 master-1 kubenswrapper[4740]: I1014 13:07:47.841351 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/ovn-controller/0.log"
Oct 14 13:07:47.842207 master-1 kubenswrapper[4740]: I1014 13:07:47.842172 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76"
Oct 14 13:07:47.897528 master-1 kubenswrapper[4740]: I1014 13:07:47.897472 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvfnh"]
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897624 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-node"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897645 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-node"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897663 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kubecfg-setup"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897675 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kubecfg-setup"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897693 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="sbdb"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897705 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="sbdb"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897718 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-ovn-metrics"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897730 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-ovn-metrics"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897743 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="northd"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897755 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="northd"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897768 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-controller"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897780 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-controller"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897794 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-acl-logging"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897806 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-acl-logging"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897819 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="nbdb"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897831 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="nbdb"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: E1014 13:07:47.897844 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovnkube-controller"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897856 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovnkube-controller"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897902 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-controller"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897917 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-node"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897929 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="northd"
Oct 14 13:07:47.897919 master-1 kubenswrapper[4740]: I1014 13:07:47.897941 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="nbdb"
Oct 14 13:07:47.899803 master-1 kubenswrapper[4740]: I1014 13:07:47.897954 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovn-acl-logging"
Oct 14 13:07:47.899803 master-1 kubenswrapper[4740]: I1014 13:07:47.897968 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="kube-rbac-proxy-ovn-metrics"
Oct 14 13:07:47.899803 master-1 kubenswrapper[4740]: I1014 13:07:47.897980 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="sbdb"
Oct 14 13:07:47.899803 master-1 kubenswrapper[4740]: I1014 13:07:47.897993 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerName="ovnkube-controller"
Oct 14 13:07:47.899803 master-1 kubenswrapper[4740]: I1014 13:07:47.898812 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:47.943600 master-1 kubenswrapper[4740]: I1014 13:07:47.943555 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:47.944364 master-1 kubenswrapper[4740]: E1014 13:07:47.943755 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:48.005383 master-1 kubenswrapper[4740]: I1014 13:07:48.005283 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-netd\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005383 master-1 kubenswrapper[4740]: I1014 13:07:48.005340 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-etc-openvswitch\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005383 master-1 kubenswrapper[4740]: I1014 13:07:48.005378 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-node-log\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005412 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005445 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-ovn-kubernetes\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005487 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-config\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005585 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-systemd\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005439 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005491 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-node-log" (OuterVolumeSpecName: "node-log") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005702 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-var-lib-openvswitch\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005547 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005592 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005741 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005806 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.005949 master-1 kubenswrapper[4740]: I1014 13:07:48.005911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-script-lib\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006037 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-kubelet\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006084 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006187 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006212 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-ovn\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006319 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006529 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-netns\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006665 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006669 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006721 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-slash\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006771 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7fkv\" (UniqueName: \"kubernetes.io/projected/9b565ca7-6b58-4c77-9be7-495cc929fbad-kube-api-access-s7fkv\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006802 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-openvswitch\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006832 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-bin\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006835 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-slash" (OuterVolumeSpecName: "host-slash") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006860 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-systemd-units\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006893 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-log-socket\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006895 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.007223 master-1 kubenswrapper[4740]: I1014 13:07:48.006928 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-env-overrides\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.006903 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.006934 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-log-socket" (OuterVolumeSpecName: "log-socket") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.006962 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovn-node-metrics-cert\") pod \"9b565ca7-6b58-4c77-9be7-495cc929fbad\" (UID: \"9b565ca7-6b58-4c77-9be7-495cc929fbad\") "
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.006991 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-etc-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007151 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-cni-netd\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-kubelet\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007307 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007415 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007531 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovnkube-config\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007602 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007677 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-systemd\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007729 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-systemd-units\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007820 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-slash\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.007921 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovnkube-script-lib\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.008448 master-1 kubenswrapper[4740]: I1014 13:07:48.008071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-ovn\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008202 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-log-socket\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008294 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgr5f\" (UniqueName: \"kubernetes.io/projected/4e6bd500-0de9-4c62-84f1-924e0ba066bb-kube-api-access-hgr5f\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008464 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-cni-bin\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008559 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-run-netns\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008623 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-var-lib-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008667 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-env-overrides\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008765 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovn-node-metrics-cert\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.008882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-node-log\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009068 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-netd\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009144 4740 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-etc-openvswitch\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009170 4740 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-node-log\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009253 4740 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009283 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-ovn-kubernetes\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009352 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009376 4740 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-var-lib-openvswitch\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009441 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovnkube-script-lib\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009466 4740 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-kubelet\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009487 4740 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-ovn\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.009711 master-1 kubenswrapper[4740]: I1014 13:07:48.009585 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-run-netns\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.011201 master-1 kubenswrapper[4740]: I1014 13:07:48.009613 4740 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-slash\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.011201 master-1 kubenswrapper[4740]: I1014 13:07:48.009636 4740 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-systemd-units\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.011201 master-1 kubenswrapper[4740]: I1014 13:07:48.009660 4740 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-openvswitch\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.011201 master-1 kubenswrapper[4740]: I1014 13:07:48.009682 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-host-cni-bin\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.011201 master-1 kubenswrapper[4740]: I1014 13:07:48.009749 4740 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-log-socket\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.011201 master-1 kubenswrapper[4740]: I1014 13:07:48.009773 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b565ca7-6b58-4c77-9be7-495cc929fbad-env-overrides\") on node \"master-1\" DevicePath \"\""
Oct 14 13:07:48.011863 master-1 kubenswrapper[4740]: I1014 13:07:48.011783 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b565ca7-6b58-4c77-9be7-495cc929fbad-kube-api-access-s7fkv" (OuterVolumeSpecName: "kube-api-access-s7fkv") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "kube-api-access-s7fkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:07:48.012627 master-1 kubenswrapper[4740]: I1014 13:07:48.012565 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:07:48.014462 master-1 kubenswrapper[4740]: I1014 13:07:48.014402 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9b565ca7-6b58-4c77-9be7-495cc929fbad" (UID: "9b565ca7-6b58-4c77-9be7-495cc929fbad"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:07:48.110165 master-1 kubenswrapper[4740]: I1014 13:07:48.109973 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110165 master-1 kubenswrapper[4740]: I1014 13:07:48.110069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110165 master-1 kubenswrapper[4740]: I1014 13:07:48.110085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-kubelet\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110165 master-1 kubenswrapper[4740]: I1014 13:07:48.110144 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-kubelet\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110165 master-1 kubenswrapper[4740]: I1014 13:07:48.110162 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovnkube-config\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-systemd\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110282 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName:
\"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-systemd-units\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-systemd-units\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-slash\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-systemd\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110403 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovnkube-script-lib\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110421 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-slash\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110434 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-ovn\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110479 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgr5f\" (UniqueName: \"kubernetes.io/projected/4e6bd500-0de9-4c62-84f1-924e0ba066bb-kube-api-access-hgr5f\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110510 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-log-socket\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110541 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110572 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-cni-bin\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110604 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-var-lib-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-env-overrides\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.110667 master-1 kubenswrapper[4740]: I1014 13:07:48.110665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-run-netns\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110694 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovn-node-metrics-cert\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110713 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-run-ovn\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110782 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-node-log\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110727 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-node-log\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-var-lib-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-etc-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-etc-openvswitch\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-cni-netd\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-cni-bin\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110978 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7fkv\" (UniqueName: \"kubernetes.io/projected/9b565ca7-6b58-4c77-9be7-495cc929fbad-kube-api-access-s7fkv\") on node \"master-1\" DevicePath \"\"" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110999 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b565ca7-6b58-4c77-9be7-495cc929fbad-ovn-node-metrics-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.111018 4740 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b565ca7-6b58-4c77-9be7-495cc929fbad-run-systemd\") on node \"master-1\" DevicePath \"\"" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.111014 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-log-socket\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.110943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.111294 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-cni-netd\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.111536 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e6bd500-0de9-4c62-84f1-924e0ba066bb-host-run-netns\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.111550 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-env-overrides\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.111614 master-1 kubenswrapper[4740]: I1014 13:07:48.111555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovnkube-script-lib\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.112601 master-1 kubenswrapper[4740]: I1014 13:07:48.111774 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovnkube-config\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.115793 master-1 kubenswrapper[4740]: I1014 13:07:48.115733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e6bd500-0de9-4c62-84f1-924e0ba066bb-ovn-node-metrics-cert\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.135860 master-1 kubenswrapper[4740]: I1014 13:07:48.135780 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgr5f\" (UniqueName: \"kubernetes.io/projected/4e6bd500-0de9-4c62-84f1-924e0ba066bb-kube-api-access-hgr5f\") pod \"ovnkube-node-qvfnh\" (UID: \"4e6bd500-0de9-4c62-84f1-924e0ba066bb\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.217146 master-1 kubenswrapper[4740]: I1014 13:07:48.217025 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:07:48.233479 master-1 kubenswrapper[4740]: W1014 13:07:48.233376 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e6bd500_0de9_4c62_84f1_924e0ba066bb.slice/crio-432c5f693959b07cbe94884b05914c4066a538ee80cd87cd9be703db452ebabc WatchSource:0}: Error finding container 432c5f693959b07cbe94884b05914c4066a538ee80cd87cd9be703db452ebabc: Status 404 returned error can't find the container with id 432c5f693959b07cbe94884b05914c4066a538ee80cd87cd9be703db452ebabc Oct 14 13:07:48.258872 master-1 kubenswrapper[4740]: I1014 13:07:48.258810 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/ovnkube-controller/0.log" Oct 14 13:07:48.262134 master-1 kubenswrapper[4740]: I1014 13:07:48.262074 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/kube-rbac-proxy-ovn-metrics/0.log" Oct 14 13:07:48.263027 master-1 kubenswrapper[4740]: I1014 13:07:48.262970 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/kube-rbac-proxy-node/0.log" Oct 14 13:07:48.263827 master-1 kubenswrapper[4740]: I1014 13:07:48.263782 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/ovn-acl-logging/0.log" Oct 14 13:07:48.264677 master-1 kubenswrapper[4740]: I1014 13:07:48.264624 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-g2f76_9b565ca7-6b58-4c77-9be7-495cc929fbad/ovn-controller/0.log" Oct 14 13:07:48.265331 master-1 kubenswrapper[4740]: I1014 13:07:48.265291 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" exitCode=1 Oct 14 13:07:48.265463 master-1 kubenswrapper[4740]: I1014 13:07:48.265333 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" exitCode=0 Oct 14 13:07:48.265463 master-1 kubenswrapper[4740]: I1014 13:07:48.265350 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" exitCode=0 Oct 14 13:07:48.265463 master-1 kubenswrapper[4740]: I1014 13:07:48.265367 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615" exitCode=0 Oct 14 13:07:48.265463 master-1 kubenswrapper[4740]: I1014 13:07:48.265415 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab" exitCode=143 Oct 14 13:07:48.265463 master-1 kubenswrapper[4740]: I1014 13:07:48.265435 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7" exitCode=143 Oct 14 13:07:48.265463 master-1 kubenswrapper[4740]: I1014 13:07:48.265454 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" containerID="de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd" exitCode=143 Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265469 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b565ca7-6b58-4c77-9be7-495cc929fbad" 
containerID="bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68" exitCode=143 Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265472 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265510 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265545 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265574 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265594 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: 
I1014 13:07:48.265637 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265655 4740 scope.go:117] "RemoveContainer" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265658 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265704 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265717 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265750 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265764 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265778 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265789 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265801 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265813 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265825 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265837 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} Oct 14 13:07:48.265795 master-1 kubenswrapper[4740]: I1014 13:07:48.265848 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 
13:07:48.265863 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265881 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265894 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265906 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265918 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265929 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265940 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265951 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265962 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265973 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.265989 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g2f76" event={"ID":"9b565ca7-6b58-4c77-9be7-495cc929fbad","Type":"ContainerDied","Data":"0bef98c8075400ff6c25edc3bb3e77e22c3a5efdc43c9bd5abf9c2e2b3b8fd29"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266005 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266020 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266031 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266044 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} Oct 
14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266054 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266065 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266077 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266087 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} Oct 14 13:07:48.267088 master-1 kubenswrapper[4740]: I1014 13:07:48.266098 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} Oct 14 13:07:48.268137 master-1 kubenswrapper[4740]: I1014 13:07:48.267523 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"432c5f693959b07cbe94884b05914c4066a538ee80cd87cd9be703db452ebabc"} Oct 14 13:07:48.291935 master-1 kubenswrapper[4740]: I1014 13:07:48.291884 4740 scope.go:117] "RemoveContainer" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" Oct 14 13:07:48.323007 master-1 kubenswrapper[4740]: I1014 13:07:48.322935 4740 scope.go:117] "RemoveContainer" 
containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" Oct 14 13:07:48.335573 master-1 kubenswrapper[4740]: I1014 13:07:48.334695 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g2f76"] Oct 14 13:07:48.338214 master-1 kubenswrapper[4740]: I1014 13:07:48.338177 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g2f76"] Oct 14 13:07:48.345510 master-1 kubenswrapper[4740]: I1014 13:07:48.345474 4740 scope.go:117] "RemoveContainer" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615" Oct 14 13:07:48.354315 master-1 kubenswrapper[4740]: I1014 13:07:48.354259 4740 scope.go:117] "RemoveContainer" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab" Oct 14 13:07:48.362527 master-1 kubenswrapper[4740]: I1014 13:07:48.362396 4740 scope.go:117] "RemoveContainer" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7" Oct 14 13:07:48.373691 master-1 kubenswrapper[4740]: I1014 13:07:48.373656 4740 scope.go:117] "RemoveContainer" containerID="de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd" Oct 14 13:07:48.458567 master-1 kubenswrapper[4740]: I1014 13:07:48.458515 4740 scope.go:117] "RemoveContainer" containerID="bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68" Oct 14 13:07:48.469318 master-1 kubenswrapper[4740]: I1014 13:07:48.469247 4740 scope.go:117] "RemoveContainer" containerID="e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d" Oct 14 13:07:48.481021 master-1 kubenswrapper[4740]: I1014 13:07:48.480873 4740 scope.go:117] "RemoveContainer" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" Oct 14 13:07:48.481651 master-1 kubenswrapper[4740]: E1014 13:07:48.481584 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": container with ID starting with afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d not found: ID does not exist" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" Oct 14 13:07:48.481788 master-1 kubenswrapper[4740]: I1014 13:07:48.481645 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} err="failed to get container status \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": rpc error: code = NotFound desc = could not find container \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": container with ID starting with afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d not found: ID does not exist" Oct 14 13:07:48.481788 master-1 kubenswrapper[4740]: I1014 13:07:48.481692 4740 scope.go:117] "RemoveContainer" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" Oct 14 13:07:48.482088 master-1 kubenswrapper[4740]: E1014 13:07:48.482022 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": container with ID starting with c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26 not found: ID does not exist" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" Oct 14 13:07:48.482088 master-1 kubenswrapper[4740]: I1014 13:07:48.482057 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} err="failed to get container status \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": rpc error: code = NotFound desc = could not find container 
\"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": container with ID starting with c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26 not found: ID does not exist" Oct 14 13:07:48.482088 master-1 kubenswrapper[4740]: I1014 13:07:48.482085 4740 scope.go:117] "RemoveContainer" containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" Oct 14 13:07:48.482723 master-1 kubenswrapper[4740]: E1014 13:07:48.482652 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": container with ID starting with 19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6 not found: ID does not exist" containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" Oct 14 13:07:48.482723 master-1 kubenswrapper[4740]: I1014 13:07:48.482689 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} err="failed to get container status \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": rpc error: code = NotFound desc = could not find container \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": container with ID starting with 19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6 not found: ID does not exist" Oct 14 13:07:48.482723 master-1 kubenswrapper[4740]: I1014 13:07:48.482705 4740 scope.go:117] "RemoveContainer" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615" Oct 14 13:07:48.483000 master-1 kubenswrapper[4740]: E1014 13:07:48.482941 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": container with ID starting with 
460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615 not found: ID does not exist" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615" Oct 14 13:07:48.483000 master-1 kubenswrapper[4740]: I1014 13:07:48.482967 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} err="failed to get container status \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": rpc error: code = NotFound desc = could not find container \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": container with ID starting with 460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615 not found: ID does not exist" Oct 14 13:07:48.483000 master-1 kubenswrapper[4740]: I1014 13:07:48.482982 4740 scope.go:117] "RemoveContainer" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: E1014 13:07:48.483400 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": container with ID starting with a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab not found: ID does not exist" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: I1014 13:07:48.483479 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} err="failed to get container status \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": rpc error: code = NotFound desc = could not find container \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": container with ID starting with 
a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab not found: ID does not exist" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: I1014 13:07:48.483542 4740 scope.go:117] "RemoveContainer" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: E1014 13:07:48.484060 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": container with ID starting with 3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7 not found: ID does not exist" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: I1014 13:07:48.484087 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} err="failed to get container status \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": rpc error: code = NotFound desc = could not find container \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": container with ID starting with 3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7 not found: ID does not exist" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: I1014 13:07:48.484104 4740 scope.go:117] "RemoveContainer" containerID="de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: E1014 13:07:48.484468 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": container with ID starting with de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd not found: ID does not exist" 
containerID="de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: I1014 13:07:48.484583 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} err="failed to get container status \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": rpc error: code = NotFound desc = could not find container \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": container with ID starting with de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd not found: ID does not exist" Oct 14 13:07:48.484970 master-1 kubenswrapper[4740]: I1014 13:07:48.484621 4740 scope.go:117] "RemoveContainer" containerID="bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68" Oct 14 13:07:48.485575 master-1 kubenswrapper[4740]: E1014 13:07:48.485351 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": container with ID starting with bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68 not found: ID does not exist" containerID="bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68" Oct 14 13:07:48.485575 master-1 kubenswrapper[4740]: I1014 13:07:48.485373 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} err="failed to get container status \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": rpc error: code = NotFound desc = could not find container \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": container with ID starting with bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68 not found: ID does not exist" Oct 14 13:07:48.485575 master-1 
kubenswrapper[4740]: I1014 13:07:48.485392 4740 scope.go:117] "RemoveContainer" containerID="e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d" Oct 14 13:07:48.485852 master-1 kubenswrapper[4740]: E1014 13:07:48.485756 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": container with ID starting with e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d not found: ID does not exist" containerID="e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d" Oct 14 13:07:48.485852 master-1 kubenswrapper[4740]: I1014 13:07:48.485776 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} err="failed to get container status \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": rpc error: code = NotFound desc = could not find container \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": container with ID starting with e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d not found: ID does not exist" Oct 14 13:07:48.485852 master-1 kubenswrapper[4740]: I1014 13:07:48.485790 4740 scope.go:117] "RemoveContainer" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" Oct 14 13:07:48.486213 master-1 kubenswrapper[4740]: I1014 13:07:48.486169 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} err="failed to get container status \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": rpc error: code = NotFound desc = could not find container \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": container with ID starting with 
afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d not found: ID does not exist" Oct 14 13:07:48.486303 master-1 kubenswrapper[4740]: I1014 13:07:48.486210 4740 scope.go:117] "RemoveContainer" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" Oct 14 13:07:48.486736 master-1 kubenswrapper[4740]: I1014 13:07:48.486670 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} err="failed to get container status \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": rpc error: code = NotFound desc = could not find container \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": container with ID starting with c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26 not found: ID does not exist" Oct 14 13:07:48.486793 master-1 kubenswrapper[4740]: I1014 13:07:48.486731 4740 scope.go:117] "RemoveContainer" containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" Oct 14 13:07:48.487256 master-1 kubenswrapper[4740]: I1014 13:07:48.487143 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} err="failed to get container status \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": rpc error: code = NotFound desc = could not find container \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": container with ID starting with 19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6 not found: ID does not exist" Oct 14 13:07:48.487256 master-1 kubenswrapper[4740]: I1014 13:07:48.487169 4740 scope.go:117] "RemoveContainer" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615" Oct 14 13:07:48.487614 master-1 kubenswrapper[4740]: I1014 13:07:48.487561 4740 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} err="failed to get container status \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": rpc error: code = NotFound desc = could not find container \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": container with ID starting with 460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615 not found: ID does not exist" Oct 14 13:07:48.487614 master-1 kubenswrapper[4740]: I1014 13:07:48.487609 4740 scope.go:117] "RemoveContainer" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab" Oct 14 13:07:48.488064 master-1 kubenswrapper[4740]: I1014 13:07:48.488006 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} err="failed to get container status \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": rpc error: code = NotFound desc = could not find container \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": container with ID starting with a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab not found: ID does not exist" Oct 14 13:07:48.488064 master-1 kubenswrapper[4740]: I1014 13:07:48.488054 4740 scope.go:117] "RemoveContainer" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7" Oct 14 13:07:48.488489 master-1 kubenswrapper[4740]: I1014 13:07:48.488439 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} err="failed to get container status \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": rpc error: code = NotFound desc = could not find container \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": container with ID starting with 
3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7 not found: ID does not exist" Oct 14 13:07:48.488568 master-1 kubenswrapper[4740]: I1014 13:07:48.488489 4740 scope.go:117] "RemoveContainer" containerID="de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd" Oct 14 13:07:48.488899 master-1 kubenswrapper[4740]: I1014 13:07:48.488761 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} err="failed to get container status \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": rpc error: code = NotFound desc = could not find container \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": container with ID starting with de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd not found: ID does not exist" Oct 14 13:07:48.488899 master-1 kubenswrapper[4740]: I1014 13:07:48.488791 4740 scope.go:117] "RemoveContainer" containerID="bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68" Oct 14 13:07:48.489178 master-1 kubenswrapper[4740]: I1014 13:07:48.489082 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} err="failed to get container status \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": rpc error: code = NotFound desc = could not find container \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": container with ID starting with bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68 not found: ID does not exist" Oct 14 13:07:48.489178 master-1 kubenswrapper[4740]: I1014 13:07:48.489114 4740 scope.go:117] "RemoveContainer" containerID="e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d" Oct 14 13:07:48.489646 master-1 kubenswrapper[4740]: I1014 13:07:48.489545 4740 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} err="failed to get container status \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": rpc error: code = NotFound desc = could not find container \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": container with ID starting with e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d not found: ID does not exist" Oct 14 13:07:48.489646 master-1 kubenswrapper[4740]: I1014 13:07:48.489569 4740 scope.go:117] "RemoveContainer" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" Oct 14 13:07:48.489943 master-1 kubenswrapper[4740]: I1014 13:07:48.489838 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} err="failed to get container status \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": rpc error: code = NotFound desc = could not find container \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": container with ID starting with afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d not found: ID does not exist" Oct 14 13:07:48.489943 master-1 kubenswrapper[4740]: I1014 13:07:48.489863 4740 scope.go:117] "RemoveContainer" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" Oct 14 13:07:48.490551 master-1 kubenswrapper[4740]: I1014 13:07:48.490389 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} err="failed to get container status \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": rpc error: code = NotFound desc = could not find container \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": container with ID starting with 
c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26 not found: ID does not exist" Oct 14 13:07:48.490551 master-1 kubenswrapper[4740]: I1014 13:07:48.490486 4740 scope.go:117] "RemoveContainer" containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" Oct 14 13:07:48.490979 master-1 kubenswrapper[4740]: I1014 13:07:48.490922 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} err="failed to get container status \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": rpc error: code = NotFound desc = could not find container \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": container with ID starting with 19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6 not found: ID does not exist" Oct 14 13:07:48.490979 master-1 kubenswrapper[4740]: I1014 13:07:48.490970 4740 scope.go:117] "RemoveContainer" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615" Oct 14 13:07:48.491453 master-1 kubenswrapper[4740]: I1014 13:07:48.491375 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} err="failed to get container status \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": rpc error: code = NotFound desc = could not find container \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": container with ID starting with 460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615 not found: ID does not exist" Oct 14 13:07:48.491453 master-1 kubenswrapper[4740]: I1014 13:07:48.491421 4740 scope.go:117] "RemoveContainer" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab" Oct 14 13:07:48.491900 master-1 kubenswrapper[4740]: I1014 13:07:48.491859 4740 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} err="failed to get container status \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": rpc error: code = NotFound desc = could not find container \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": container with ID starting with a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab not found: ID does not exist" Oct 14 13:07:48.491975 master-1 kubenswrapper[4740]: I1014 13:07:48.491950 4740 scope.go:117] "RemoveContainer" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7" Oct 14 13:07:48.492988 master-1 kubenswrapper[4740]: I1014 13:07:48.492929 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} err="failed to get container status \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": rpc error: code = NotFound desc = could not find container \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": container with ID starting with 3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7 not found: ID does not exist" Oct 14 13:07:48.492988 master-1 kubenswrapper[4740]: I1014 13:07:48.492979 4740 scope.go:117] "RemoveContainer" containerID="de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd" Oct 14 13:07:48.493696 master-1 kubenswrapper[4740]: I1014 13:07:48.493650 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} err="failed to get container status \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": rpc error: code = NotFound desc = could not find container \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": container with ID starting with 
de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd not found: ID does not exist" Oct 14 13:07:48.493696 master-1 kubenswrapper[4740]: I1014 13:07:48.493679 4740 scope.go:117] "RemoveContainer" containerID="bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68" Oct 14 13:07:48.494098 master-1 kubenswrapper[4740]: I1014 13:07:48.494032 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} err="failed to get container status \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": rpc error: code = NotFound desc = could not find container \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": container with ID starting with bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68 not found: ID does not exist" Oct 14 13:07:48.494098 master-1 kubenswrapper[4740]: I1014 13:07:48.494091 4740 scope.go:117] "RemoveContainer" containerID="e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d" Oct 14 13:07:48.494498 master-1 kubenswrapper[4740]: I1014 13:07:48.494447 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} err="failed to get container status \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": rpc error: code = NotFound desc = could not find container \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": container with ID starting with e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d not found: ID does not exist" Oct 14 13:07:48.494498 master-1 kubenswrapper[4740]: I1014 13:07:48.494482 4740 scope.go:117] "RemoveContainer" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d" Oct 14 13:07:48.494928 master-1 kubenswrapper[4740]: I1014 13:07:48.494847 4740 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} err="failed to get container status \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": rpc error: code = NotFound desc = could not find container \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": container with ID starting with afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d not found: ID does not exist" Oct 14 13:07:48.494928 master-1 kubenswrapper[4740]: I1014 13:07:48.494919 4740 scope.go:117] "RemoveContainer" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26" Oct 14 13:07:48.495350 master-1 kubenswrapper[4740]: I1014 13:07:48.495299 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} err="failed to get container status \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": rpc error: code = NotFound desc = could not find container \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": container with ID starting with c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26 not found: ID does not exist" Oct 14 13:07:48.495350 master-1 kubenswrapper[4740]: I1014 13:07:48.495336 4740 scope.go:117] "RemoveContainer" containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6" Oct 14 13:07:48.495615 master-1 kubenswrapper[4740]: I1014 13:07:48.495572 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} err="failed to get container status \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": rpc error: code = NotFound desc = could not find container \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": container with ID starting with 
19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6 not found: ID does not exist"
Oct 14 13:07:48.495615 master-1 kubenswrapper[4740]: I1014 13:07:48.495601 4740 scope.go:117] "RemoveContainer" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"
Oct 14 13:07:48.495985 master-1 kubenswrapper[4740]: I1014 13:07:48.495921 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} err="failed to get container status \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": rpc error: code = NotFound desc = could not find container \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": container with ID starting with 460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615 not found: ID does not exist"
Oct 14 13:07:48.496039 master-1 kubenswrapper[4740]: I1014 13:07:48.495980 4740 scope.go:117] "RemoveContainer" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"
Oct 14 13:07:48.496483 master-1 kubenswrapper[4740]: I1014 13:07:48.496440 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} err="failed to get container status \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": rpc error: code = NotFound desc = could not find container \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": container with ID starting with a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab not found: ID does not exist"
Oct 14 13:07:48.496483 master-1 kubenswrapper[4740]: I1014 13:07:48.496471 4740 scope.go:117] "RemoveContainer" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"
Oct 14 13:07:48.496868 master-1 kubenswrapper[4740]: I1014 13:07:48.496797 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} err="failed to get container status \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": rpc error: code = NotFound desc = could not find container \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": container with ID starting with 3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7 not found: ID does not exist"
Oct 14 13:07:48.496926 master-1 kubenswrapper[4740]: I1014 13:07:48.496865 4740 scope.go:117] "RemoveContainer" containerID="de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"
Oct 14 13:07:48.497257 master-1 kubenswrapper[4740]: I1014 13:07:48.497194 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd"} err="failed to get container status \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": rpc error: code = NotFound desc = could not find container \"de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd\": container with ID starting with de1ebe6facb33496912cf282691f4553cd53a170db3ff7a08737ea125d3218bd not found: ID does not exist"
Oct 14 13:07:48.497257 master-1 kubenswrapper[4740]: I1014 13:07:48.497245 4740 scope.go:117] "RemoveContainer" containerID="bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"
Oct 14 13:07:48.497691 master-1 kubenswrapper[4740]: I1014 13:07:48.497617 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68"} err="failed to get container status \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": rpc error: code = NotFound desc = could not find container \"bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68\": container with ID starting with bc52e5dec4c72e778f0bcacc2db23fb3f9ce1a8c3d9c05a9185503a63bd2ba68 not found: ID does not exist"
Oct 14 13:07:48.497742 master-1 kubenswrapper[4740]: I1014 13:07:48.497685 4740 scope.go:117] "RemoveContainer" containerID="e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"
Oct 14 13:07:48.498146 master-1 kubenswrapper[4740]: I1014 13:07:48.498089 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d"} err="failed to get container status \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": rpc error: code = NotFound desc = could not find container \"e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d\": container with ID starting with e7f28a80c64a8f7875183c38692855f3d3da41036c1c02f30b488e701b7ab56d not found: ID does not exist"
Oct 14 13:07:48.498146 master-1 kubenswrapper[4740]: I1014 13:07:48.498135 4740 scope.go:117] "RemoveContainer" containerID="afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"
Oct 14 13:07:48.498687 master-1 kubenswrapper[4740]: I1014 13:07:48.498609 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d"} err="failed to get container status \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": rpc error: code = NotFound desc = could not find container \"afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d\": container with ID starting with afecae17555ed335ad4d576d44bf0e4ceb669a41a579a83459ab3f17fb502f2d not found: ID does not exist"
Oct 14 13:07:48.498753 master-1 kubenswrapper[4740]: I1014 13:07:48.498687 4740 scope.go:117] "RemoveContainer" containerID="c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"
Oct 14 13:07:48.499137 master-1 kubenswrapper[4740]: I1014 13:07:48.499086 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26"} err="failed to get container status \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": rpc error: code = NotFound desc = could not find container \"c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26\": container with ID starting with c003d1b7a6c0e9fa273b324c3232e3fdd88f7713fb3004afeb0cd2acab801d26 not found: ID does not exist"
Oct 14 13:07:48.499137 master-1 kubenswrapper[4740]: I1014 13:07:48.499121 4740 scope.go:117] "RemoveContainer" containerID="19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"
Oct 14 13:07:48.499516 master-1 kubenswrapper[4740]: I1014 13:07:48.499471 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6"} err="failed to get container status \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": rpc error: code = NotFound desc = could not find container \"19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6\": container with ID starting with 19ca4fe0af4d8f56a66c976f007ddb5d8c1a3da752ee447fa958bf1efa9af9a6 not found: ID does not exist"
Oct 14 13:07:48.499599 master-1 kubenswrapper[4740]: I1014 13:07:48.499516 4740 scope.go:117] "RemoveContainer" containerID="460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"
Oct 14 13:07:48.499871 master-1 kubenswrapper[4740]: I1014 13:07:48.499836 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615"} err="failed to get container status \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": rpc error: code = NotFound desc = could not find container \"460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615\": container with ID starting with 460a185b2ac4f26c1982482e350a40f400b35a5aa4d8aafc01860eba550fd615 not found: ID does not exist"
Oct 14 13:07:48.499871 master-1 kubenswrapper[4740]: I1014 13:07:48.499867 4740 scope.go:117] "RemoveContainer" containerID="a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"
Oct 14 13:07:48.500255 master-1 kubenswrapper[4740]: I1014 13:07:48.500183 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab"} err="failed to get container status \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": rpc error: code = NotFound desc = could not find container \"a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab\": container with ID starting with a7c09d7392037c63dd6762c22181eb4a507ea3fe78f94a2fc1e1abcb41d049ab not found: ID does not exist"
Oct 14 13:07:48.500347 master-1 kubenswrapper[4740]: I1014 13:07:48.500257 4740 scope.go:117] "RemoveContainer" containerID="3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"
Oct 14 13:07:48.500606 master-1 kubenswrapper[4740]: I1014 13:07:48.500566 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7"} err="failed to get container status \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": rpc error: code = NotFound desc = could not find container \"3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7\": container with ID starting with 3b2aa035fa9bf2fa89879e8e3f5d9ba54cb7bb6ee613ef209a2e76f4a395e6d7 not found: ID does not exist"
Oct 14 13:07:48.943764 master-1 kubenswrapper[4740]: I1014 13:07:48.943637 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:48.944781 master-1 kubenswrapper[4740]: E1014 13:07:48.943821 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:48.950621 master-1 kubenswrapper[4740]: I1014 13:07:48.950557 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b565ca7-6b58-4c77-9be7-495cc929fbad" path="/var/lib/kubelet/pods/9b565ca7-6b58-4c77-9be7-495cc929fbad/volumes"
Oct 14 13:07:49.274856 master-1 kubenswrapper[4740]: I1014 13:07:49.274753 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e6bd500-0de9-4c62-84f1-924e0ba066bb" containerID="695690d379e66d8480e9c584e267bcf283d8f6932d94f3a15e82ebb66b896b8c" exitCode=0
Oct 14 13:07:49.274856 master-1 kubenswrapper[4740]: I1014 13:07:49.274811 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerDied","Data":"695690d379e66d8480e9c584e267bcf283d8f6932d94f3a15e82ebb66b896b8c"}
Oct 14 13:07:49.943568 master-1 kubenswrapper[4740]: I1014 13:07:49.943498 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:49.943798 master-1 kubenswrapper[4740]: E1014 13:07:49.943742 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:50.281590 master-1 kubenswrapper[4740]: I1014 13:07:50.281479 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"4a9606ee2f32831ba88336d5e8a2b9c569d54389ab597ca402d0c8d9e15c4d91"}
Oct 14 13:07:50.281590 master-1 kubenswrapper[4740]: I1014 13:07:50.281529 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"996372f8c6b70bab2a739967f8a85bc783d9400f7c1d15dfffd0e5ec872b815f"}
Oct 14 13:07:50.281590 master-1 kubenswrapper[4740]: I1014 13:07:50.281545 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"7305cbbcb025f845948c8a95f78fc47b25294efced5553ccedec3fa94bf51ca9"}
Oct 14 13:07:50.281590 master-1 kubenswrapper[4740]: I1014 13:07:50.281558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"1d32c8f5998c5a011514e460023e4c9f792354432890fdf7d11d32f67b204fe7"}
Oct 14 13:07:50.281590 master-1 kubenswrapper[4740]: I1014 13:07:50.281570 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"c35c690d3c92fe7199a5247005e1d609e44134c5106703757a4942512153be76"}
Oct 14 13:07:50.281590 master-1 kubenswrapper[4740]: I1014 13:07:50.281581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"47c1e25350b6e31be742d21a799d9e78988cb93eb1690d5eb67308878db44569"}
Oct 14 13:07:50.943265 master-1 kubenswrapper[4740]: I1014 13:07:50.943103 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:50.943744 master-1 kubenswrapper[4740]: E1014 13:07:50.943371 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:51.943203 master-1 kubenswrapper[4740]: I1014 13:07:51.942796 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:51.944070 master-1 kubenswrapper[4740]: E1014 13:07:51.943322 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:52.292997 master-1 kubenswrapper[4740]: I1014 13:07:52.292910 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"26b4ccbc31b946205b010c66087afb5763b00ce720920e9c4f708563498a6b10"}
Oct 14 13:07:52.943038 master-1 kubenswrapper[4740]: I1014 13:07:52.942920 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:52.943200 master-1 kubenswrapper[4740]: E1014 13:07:52.943129 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:53.943916 master-1 kubenswrapper[4740]: I1014 13:07:53.943826 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:53.945134 master-1 kubenswrapper[4740]: E1014 13:07:53.944081 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:54.943427 master-1 kubenswrapper[4740]: I1014 13:07:54.943214 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:54.943892 master-1 kubenswrapper[4740]: E1014 13:07:54.943683 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:55.310580 master-1 kubenswrapper[4740]: I1014 13:07:55.310150 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" event={"ID":"4e6bd500-0de9-4c62-84f1-924e0ba066bb","Type":"ContainerStarted","Data":"ca4a02aa2bc207a924d04bf7cd5daa757471f818422be90f316c9af290b63b4d"}
Oct 14 13:07:55.311287 master-1 kubenswrapper[4740]: I1014 13:07:55.310738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:55.311287 master-1 kubenswrapper[4740]: I1014 13:07:55.310810 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:55.311287 master-1 kubenswrapper[4740]: I1014 13:07:55.310835 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh"
Oct 14 13:07:55.342882 master-1 kubenswrapper[4740]: I1014 13:07:55.342049 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" podStartSLOduration=8.341991119 podStartE2EDuration="8.341991119s" podCreationTimestamp="2025-10-14 13:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:07:55.340873252 +0000 UTC m=+101.151162621" watchObservedRunningTime="2025-10-14 13:07:55.341991119 +0000 UTC m=+101.152280478"
Oct 14 13:07:55.942824 master-1 kubenswrapper[4740]: I1014 13:07:55.942756 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:55.943052 master-1 kubenswrapper[4740]: E1014 13:07:55.942941 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:56.483119 master-1 kubenswrapper[4740]: I1014 13:07:56.483029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:56.483838 master-1 kubenswrapper[4740]: E1014 13:07:56.483296 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:07:56.483838 master-1 kubenswrapper[4740]: E1014 13:07:56.483322 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:07:56.483838 master-1 kubenswrapper[4740]: E1014 13:07:56.483341 4740 projected.go:194] Error preparing data for projected volume kube-api-access-mbd6g for pod openshift-network-diagnostics/network-check-target-sndvg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:56.483838 master-1 kubenswrapper[4740]: E1014 13:07:56.483407 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g podName:a745a9ed-4507-491b-b50f-7a5e3837b928 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:28.483383521 +0000 UTC m=+134.293672880 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-mbd6g" (UniqueName: "kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g") pod "network-check-target-sndvg" (UID: "a745a9ed-4507-491b-b50f-7a5e3837b928") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:07:56.944072 master-1 kubenswrapper[4740]: I1014 13:07:56.943958 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:56.944412 master-1 kubenswrapper[4740]: E1014 13:07:56.944192 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:57.943010 master-1 kubenswrapper[4740]: I1014 13:07:57.942724 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:57.943010 master-1 kubenswrapper[4740]: E1014 13:07:57.942954 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8l654" podUID="1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1"
Oct 14 13:07:58.943170 master-1 kubenswrapper[4740]: I1014 13:07:58.943037 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg"
Oct 14 13:07:58.944136 master-1 kubenswrapper[4740]: E1014 13:07:58.943272 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-sndvg" podUID="a745a9ed-4507-491b-b50f-7a5e3837b928"
Oct 14 13:07:59.520698 master-1 kubenswrapper[4740]: I1014 13:07:59.520586 4740 kubelet_node_status.go:724] "Recording event message for node" node="master-1" event="NodeReady"
Oct 14 13:07:59.520698 master-1 kubenswrapper[4740]: I1014 13:07:59.520717 4740 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Oct 14 13:07:59.555282 master-1 kubenswrapper[4740]: I1014 13:07:59.555168 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"]
Oct 14 13:07:59.555689 master-1 kubenswrapper[4740]: I1014 13:07:59.555641 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:07:59.562472 master-1 kubenswrapper[4740]: I1014 13:07:59.562401 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Oct 14 13:07:59.562770 master-1 kubenswrapper[4740]: I1014 13:07:59.562708 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Oct 14 13:07:59.562770 master-1 kubenswrapper[4740]: I1014 13:07:59.562735 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Oct 14 13:07:59.563203 master-1 kubenswrapper[4740]: I1014 13:07:59.562801 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Oct 14 13:07:59.565587 master-1 kubenswrapper[4740]: I1014 13:07:59.565480 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"]
Oct 14 13:07:59.566087 master-1 kubenswrapper[4740]: I1014 13:07:59.566050 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.566689 master-1 kubenswrapper[4740]: I1014 13:07:59.566647 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"]
Oct 14 13:07:59.567063 master-1 kubenswrapper[4740]: I1014 13:07:59.566989 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:07:59.567310 master-1 kubenswrapper[4740]: I1014 13:07:59.567269 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf"]
Oct 14 13:07:59.567720 master-1 kubenswrapper[4740]: I1014 13:07:59.567685 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf"
Oct 14 13:07:59.569134 master-1 kubenswrapper[4740]: I1014 13:07:59.568899 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Oct 14 13:07:59.569134 master-1 kubenswrapper[4740]: I1014 13:07:59.568908 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Oct 14 13:07:59.569550 master-1 kubenswrapper[4740]: I1014 13:07:59.569508 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Oct 14 13:07:59.569691 master-1 kubenswrapper[4740]: I1014 13:07:59.569594 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"]
Oct 14 13:07:59.570024 master-1 kubenswrapper[4740]: I1014 13:07:59.569982 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Oct 14 13:07:59.570416 master-1 kubenswrapper[4740]: I1014 13:07:59.570379 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Oct 14 13:07:59.570585 master-1 kubenswrapper[4740]: I1014 13:07:59.570430 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Oct 14 13:07:59.571004 master-1 kubenswrapper[4740]: I1014 13:07:59.570941 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.571097 master-1 kubenswrapper[4740]: I1014 13:07:59.571029 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.571097 master-1 kubenswrapper[4740]: I1014 13:07:59.571058 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Oct 14 13:07:59.571097 master-1 kubenswrapper[4740]: I1014 13:07:59.571065 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Oct 14 13:07:59.571517 master-1 kubenswrapper[4740]: I1014 13:07:59.571464 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.571929 master-1 kubenswrapper[4740]: I1014 13:07:59.571864 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"]
Oct 14 13:07:59.572550 master-1 kubenswrapper[4740]: I1014 13:07:59.572505 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:07:59.573840 master-1 kubenswrapper[4740]: I1014 13:07:59.573278 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"]
Oct 14 13:07:59.573840 master-1 kubenswrapper[4740]: I1014 13:07:59.573709 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:07:59.573840 master-1 kubenswrapper[4740]: I1014 13:07:59.573778 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"]
Oct 14 13:07:59.574158 master-1 kubenswrapper[4740]: I1014 13:07:59.574073 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:07:59.574944 master-1 kubenswrapper[4740]: I1014 13:07:59.574876 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"]
Oct 14 13:07:59.575335 master-1 kubenswrapper[4740]: I1014 13:07:59.575293 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.576555 master-1 kubenswrapper[4740]: I1014 13:07:59.576484 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"]
Oct 14 13:07:59.577136 master-1 kubenswrapper[4740]: I1014 13:07:59.577086 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.577648 master-1 kubenswrapper[4740]: I1014 13:07:59.577604 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Oct 14 13:07:59.577937 master-1 kubenswrapper[4740]: I1014 13:07:59.577661 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Oct 14 13:07:59.577937 master-1 kubenswrapper[4740]: I1014 13:07:59.577701 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"]
Oct 14 13:07:59.578351 master-1 kubenswrapper[4740]: I1014 13:07:59.578012 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Oct 14 13:07:59.578351 master-1 kubenswrapper[4740]: I1014 13:07:59.578040 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:07:59.579374 master-1 kubenswrapper[4740]: I1014 13:07:59.578811 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Oct 14 13:07:59.579374 master-1 kubenswrapper[4740]: I1014 13:07:59.578924 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"]
Oct 14 13:07:59.579374 master-1 kubenswrapper[4740]: I1014 13:07:59.579305 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Oct 14 13:07:59.579902 master-1 kubenswrapper[4740]: I1014 13:07:59.579846 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Oct 14 13:07:59.580085 master-1 kubenswrapper[4740]: I1014 13:07:59.579902 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.580085 master-1 kubenswrapper[4740]: I1014 13:07:59.580034 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.580085 master-1 kubenswrapper[4740]: I1014 13:07:59.580062 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"]
Oct 14 13:07:59.580405 master-1 kubenswrapper[4740]: I1014 13:07:59.580206 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:07:59.580513 master-1 kubenswrapper[4740]: I1014 13:07:59.580461 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Oct 14 13:07:59.584020 master-1 kubenswrapper[4740]: I1014 13:07:59.583909 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.585676 master-1 kubenswrapper[4740]: I1014 13:07:59.585141 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"]
Oct 14 13:07:59.586720 master-1 kubenswrapper[4740]: I1014 13:07:59.586489 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Oct 14 13:07:59.586720 master-1 kubenswrapper[4740]: I1014 13:07:59.586545 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Oct 14 13:07:59.586720 master-1 kubenswrapper[4740]: I1014 13:07:59.586696 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"]
Oct 14 13:07:59.587714 master-1 kubenswrapper[4740]: I1014 13:07:59.587628 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Oct 14 13:07:59.587714 master-1 kubenswrapper[4740]: I1014 13:07:59.587650 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.587714 master-1 kubenswrapper[4740]: I1014 13:07:59.587636 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Oct 14 13:07:59.591069 master-1 kubenswrapper[4740]: I1014 13:07:59.590328 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"]
Oct 14 13:07:59.591069 master-1 kubenswrapper[4740]: I1014 13:07:59.590518 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:07:59.591498 master-1 kubenswrapper[4740]: I1014 13:07:59.591384 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Oct 14 13:07:59.592133 master-1 kubenswrapper[4740]: I1014 13:07:59.591831 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Oct 14 13:07:59.592133 master-1 kubenswrapper[4740]: I1014 13:07:59.591840 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.592133 master-1 kubenswrapper[4740]: I1014 13:07:59.591886 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.593320 master-1 kubenswrapper[4740]: I1014 13:07:59.592479 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Oct 14 13:07:59.593320 master-1 kubenswrapper[4740]: I1014 13:07:59.592500 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Oct 14 13:07:59.593320 master-1 kubenswrapper[4740]: I1014 13:07:59.592694 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.593320 master-1
kubenswrapper[4740]: I1014 13:07:59.592769 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 14 13:07:59.593320 master-1 kubenswrapper[4740]: I1014 13:07:59.592848 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 14 13:07:59.593320 master-1 kubenswrapper[4740]: I1014 13:07:59.592864 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"] Oct 14 13:07:59.593320 master-1 kubenswrapper[4740]: I1014 13:07:59.592884 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Oct 14 13:07:59.595358 master-1 kubenswrapper[4740]: I1014 13:07:59.594971 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"] Oct 14 13:07:59.596854 master-1 kubenswrapper[4740]: I1014 13:07:59.596807 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"] Oct 14 13:07:59.597534 master-1 kubenswrapper[4740]: I1014 13:07:59.597474 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 14 13:07:59.597636 master-1 kubenswrapper[4740]: I1014 13:07:59.597552 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"] Oct 14 13:07:59.599105 master-1 kubenswrapper[4740]: I1014 13:07:59.598260 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:07:59.599105 master-1 kubenswrapper[4740]: I1014 13:07:59.598287 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:07:59.599105 master-1 kubenswrapper[4740]: I1014 13:07:59.598677 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf"] Oct 14 13:07:59.599105 master-1 kubenswrapper[4740]: I1014 13:07:59.598938 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 14 13:07:59.599105 master-1 kubenswrapper[4740]: I1014 13:07:59.598947 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 14 13:07:59.599105 master-1 kubenswrapper[4740]: I1014 13:07:59.598993 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 14 13:07:59.599105 master-1 kubenswrapper[4740]: I1014 13:07:59.599036 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 14 13:07:59.600281 master-1 kubenswrapper[4740]: I1014 13:07:59.600204 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"] Oct 14 13:07:59.600690 master-1 kubenswrapper[4740]: I1014 13:07:59.600633 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 14 13:07:59.604628 master-1 kubenswrapper[4740]: I1014 13:07:59.604571 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 14 13:07:59.606665 master-1 kubenswrapper[4740]: I1014 13:07:59.606589 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 
13:07:59.606917 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607209 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"] Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"] Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607353 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-config\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607425 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpgx\" (UniqueName: \"kubernetes.io/projected/62ef5e24-de36-454a-a34c-e741a86a6f96-kube-api-access-nbpgx\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607489 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt496\" (UniqueName: \"kubernetes.io/projected/1fa31cdd-e051-4987-a1a2-814fc7445e6b-kube-api-access-nt496\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" 
Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607537 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-config\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607582 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzx6\" (UniqueName: \"kubernetes.io/projected/db9c19df-41e6-4216-829f-dd2975ff5108-kube-api-access-ztzx6\") pod \"csi-snapshot-controller-operator-7ff96dd767-9htmf\" (UID: \"db9c19df-41e6-4216-829f-dd2975ff5108\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf"
Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607627 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.607807 master-1 kubenswrapper[4740]: I1014 13:07:59.607671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ca808a-394d-4a17-ac12-1df264c7ed92-images\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.611586 master-1 kubenswrapper[4740]: I1014 13:07:59.608500 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"]
Oct 14 13:07:59.612787 master-1 kubenswrapper[4740]: I1014 13:07:59.612703 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.612924 master-1 kubenswrapper[4740]: I1014 13:07:59.612805 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Oct 14 13:07:59.613783 master-1 kubenswrapper[4740]: I1014 13:07:59.612813 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.613783 master-1 kubenswrapper[4740]: I1014 13:07:59.613715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/398ba6fd-0f8f-46af-b690-61a6eec9176b-trusted-ca\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.613976 master-1 kubenswrapper[4740]: I1014 13:07:59.613783 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/398ba6fd-0f8f-46af-b690-61a6eec9176b-bound-sa-token\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.613976 master-1 kubenswrapper[4740]: I1014 13:07:59.613886 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655ad1ce-582a-4728-8bfd-ca4164468de3-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:07:59.613976 master-1 kubenswrapper[4740]: I1014 13:07:59.613940 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzc47\" (UniqueName: \"kubernetes.io/projected/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-kube-api-access-dzc47\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:07:59.614163 master-1 kubenswrapper[4740]: I1014 13:07:59.614001 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxl25\" (UniqueName: \"kubernetes.io/projected/c4ca808a-394d-4a17-ac12-1df264c7ed92-kube-api-access-sxl25\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.614163 master-1 kubenswrapper[4740]: I1014 13:07:59.614055 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b51ef0bc-8b0e-4fab-b101-660ed408924c-images\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.614163 master-1 kubenswrapper[4740]: I1014 13:07:59.614110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1a35e1e-333f-480c-b1d6-059475700627-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.614392 master-1 kubenswrapper[4740]: I1014 13:07:59.614161 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f8b5ead9-7212-4a2f-8105-92d1c5384308-available-featuregates\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:07:59.614392 master-1 kubenswrapper[4740]: I1014 13:07:59.614216 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/62ef5e24-de36-454a-a34c-e741a86a6f96-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:07:59.614521 master-1 kubenswrapper[4740]: I1014 13:07:59.614379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.614521 master-1 kubenswrapper[4740]: I1014 13:07:59.614443 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4ca808a-394d-4a17-ac12-1df264c7ed92-auth-proxy-config\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.614521 master-1 kubenswrapper[4740]: I1014 13:07:59.614485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jkb\" (UniqueName: \"kubernetes.io/projected/f8b5ead9-7212-4a2f-8105-92d1c5384308-kube-api-access-j9jkb\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:07:59.614726 master-1 kubenswrapper[4740]: I1014 13:07:59.614542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:07:59.614917 master-1 kubenswrapper[4740]: I1014 13:07:59.614868 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"]
Oct 14 13:07:59.615436 master-1 kubenswrapper[4740]: I1014 13:07:59.614999 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d8hx\" (UniqueName: \"kubernetes.io/projected/ab511c1d-28e3-448a-86ec-cea21871fd26-kube-api-access-4d8hx\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:07:59.616807 master-1 kubenswrapper[4740]: I1014 13:07:59.615824 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"]
Oct 14 13:07:59.616807 master-1 kubenswrapper[4740]: I1014 13:07:59.616190 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"
Oct 14 13:07:59.616807 master-1 kubenswrapper[4740]: I1014 13:07:59.616612 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51ef0bc-8b0e-4fab-b101-660ed408924c-config\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.616807 master-1 kubenswrapper[4740]: I1014 13:07:59.616652 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:07:59.616807 master-1 kubenswrapper[4740]: I1014 13:07:59.616691 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:07:59.616807 master-1 kubenswrapper[4740]: I1014 13:07:59.616731 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.616734 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz47q\" (UniqueName: \"kubernetes.io/projected/398ba6fd-0f8f-46af-b690-61a6eec9176b-kube-api-access-tz47q\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.616872 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98bm6\" (UniqueName: \"kubernetes.io/projected/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-kube-api-access-98bm6\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.616909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhdd\" (UniqueName: \"kubernetes.io/projected/655ad1ce-582a-4728-8bfd-ca4164468de3-kube-api-access-klhdd\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.616932 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.616964 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmhg\" (UniqueName: \"kubernetes.io/projected/b51ef0bc-8b0e-4fab-b101-660ed408924c-kube-api-access-wlmhg\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.616989 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617015 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8b5ead9-7212-4a2f-8105-92d1c5384308-serving-cert\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617083 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab511c1d-28e3-448a-86ec-cea21871fd26-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617104 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-images\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:07:59.617265 master-1 kubenswrapper[4740]: I1014 13:07:59.617177 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.617203 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.617248 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.617268 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2tt\" (UniqueName: \"kubernetes.io/projected/7be129fe-d04d-4384-a0e9-76b3148a1f3e-kube-api-access-zk2tt\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.617293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a35e1e-333f-480c-b1d6-059475700627-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.617312 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dlkx\" (UniqueName: \"kubernetes.io/projected/b1a35e1e-333f-480c-b1d6-059475700627-kube-api-access-5dlkx\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.617658 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt"]
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.618184 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q"]
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.618536 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.618612 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q"
Oct 14 13:07:59.620320 master-1 kubenswrapper[4740]: I1014 13:07:59.619520 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Oct 14 13:07:59.625088 master-1 kubenswrapper[4740]: I1014 13:07:59.625028 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-chrmm"]
Oct 14 13:07:59.625871 master-1 kubenswrapper[4740]: I1014 13:07:59.625828 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"]
Oct 14 13:07:59.626391 master-1 kubenswrapper[4740]: I1014 13:07:59.626352 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"
Oct 14 13:07:59.626892 master-1 kubenswrapper[4740]: I1014 13:07:59.626848 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm"
Oct 14 13:07:59.628510 master-1 kubenswrapper[4740]: I1014 13:07:59.627964 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Oct 14 13:07:59.630793 master-1 kubenswrapper[4740]: I1014 13:07:59.630221 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Oct 14 13:07:59.631829 master-1 kubenswrapper[4740]: I1014 13:07:59.631149 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Oct 14 13:07:59.631829 master-1 kubenswrapper[4740]: I1014 13:07:59.631360 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Oct 14 13:07:59.631829 master-1 kubenswrapper[4740]: I1014 13:07:59.631660 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Oct 14 13:07:59.631829 master-1 kubenswrapper[4740]: I1014 13:07:59.631795 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.632444 master-1 kubenswrapper[4740]: I1014 13:07:59.632412 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"]
Oct 14 13:07:59.632535 master-1 kubenswrapper[4740]: I1014 13:07:59.632498 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Oct 14 13:07:59.633565 master-1 kubenswrapper[4740]: I1014 13:07:59.633516 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"
Oct 14 13:07:59.633984 master-1 kubenswrapper[4740]: I1014 13:07:59.633930 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Oct 14 13:07:59.634452 master-1 kubenswrapper[4740]: I1014 13:07:59.634395 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"]
Oct 14 13:07:59.634647 master-1 kubenswrapper[4740]: I1014 13:07:59.634600 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Oct 14 13:07:59.637587 master-1 kubenswrapper[4740]: I1014 13:07:59.637512 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.637728 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.638101 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.638327 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.638458 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.638481 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.638526 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"]
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.638561 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.638754 master-1 kubenswrapper[4740]: I1014 13:07:59.638561 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Oct 14 13:07:59.639287 master-1 kubenswrapper[4740]: I1014 13:07:59.639132 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" Oct 14 13:07:59.639601 master-1 kubenswrapper[4740]: I1014 13:07:59.639546 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Oct 14 13:07:59.639601 master-1 kubenswrapper[4740]: I1014 13:07:59.639567 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Oct 14 13:07:59.639730 master-1 kubenswrapper[4740]: I1014 13:07:59.639599 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-djh96"] Oct 14 13:07:59.639903 master-1 kubenswrapper[4740]: I1014 13:07:59.639851 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 14 13:07:59.640189 master-1 kubenswrapper[4740]: I1014 13:07:59.640139 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 14 13:07:59.640572 master-1 kubenswrapper[4740]: I1014 13:07:59.640530 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Oct 14 13:07:59.642104 master-1 kubenswrapper[4740]: I1014 13:07:59.641416 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"] Oct 14 13:07:59.642104 master-1 kubenswrapper[4740]: I1014 13:07:59.641698 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Oct 14 13:07:59.642104 master-1 kubenswrapper[4740]: I1014 13:07:59.641865 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Oct 14 13:07:59.642104 master-1 kubenswrapper[4740]: I1014 13:07:59.641897 
4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.642370 master-1 kubenswrapper[4740]: I1014 13:07:59.642291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:07:59.642435 master-1 kubenswrapper[4740]: I1014 13:07:59.642373 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-m6qfh"]
Oct 14 13:07:59.643146 master-1 kubenswrapper[4740]: I1014 13:07:59.643068 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"
Oct 14 13:07:59.643632 master-1 kubenswrapper[4740]: I1014 13:07:59.643481 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.643632 master-1 kubenswrapper[4740]: I1014 13:07:59.643482 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Oct 14 13:07:59.643812 master-1 kubenswrapper[4740]: I1014 13:07:59.643653 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Oct 14 13:07:59.643812 master-1 kubenswrapper[4740]: I1014 13:07:59.643718 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-m6qfh"
Oct 14 13:07:59.644036 master-1 kubenswrapper[4740]: I1014 13:07:59.644001 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Oct 14 13:07:59.644447 master-1 kubenswrapper[4740]: I1014 13:07:59.644395 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Oct 14 13:07:59.645640 master-1 kubenswrapper[4740]: I1014 13:07:59.645328 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.645640 master-1 kubenswrapper[4740]: I1014 13:07:59.645501 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Oct 14 13:07:59.645640 master-1 kubenswrapper[4740]: I1014 13:07:59.645631 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Oct 14 13:07:59.646046 master-1 kubenswrapper[4740]: I1014 13:07:59.645946 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Oct 14 13:07:59.646731 master-1 kubenswrapper[4740]: I1014 13:07:59.646669 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-nh2qc"]
Oct 14 13:07:59.647810 master-1 kubenswrapper[4740]: I1014 13:07:59.647627 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Oct 14 13:07:59.647810 master-1 kubenswrapper[4740]: I1014 13:07:59.647731 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Oct 14 13:07:59.647810 master-1 kubenswrapper[4740]: I1014 13:07:59.647782 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:07:59.649149 master-1 kubenswrapper[4740]: I1014 13:07:59.648941 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"]
Oct 14 13:07:59.649149 master-1 kubenswrapper[4740]: I1014 13:07:59.648977 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.649426 master-1 kubenswrapper[4740]: I1014 13:07:59.649362 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.650483 master-1 kubenswrapper[4740]: I1014 13:07:59.649971 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"
Oct 14 13:07:59.650483 master-1 kubenswrapper[4740]: I1014 13:07:59.650413 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"]
Oct 14 13:07:59.651878 master-1 kubenswrapper[4740]: I1014 13:07:59.651802 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"]
Oct 14 13:07:59.652052 master-1 kubenswrapper[4740]: I1014 13:07:59.651890 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"
Oct 14 13:07:59.653211 master-1 kubenswrapper[4740]: I1014 13:07:59.653142 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-9npgz"]
Oct 14 13:07:59.654276 master-1 kubenswrapper[4740]: I1014 13:07:59.654187 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-mzrkb"]
Oct 14 13:07:59.655330 master-1 kubenswrapper[4740]: I1014 13:07:59.654923 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mzrkb"
Oct 14 13:07:59.655330 master-1 kubenswrapper[4740]: I1014 13:07:59.654996 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Oct 14 13:07:59.655330 master-1 kubenswrapper[4740]: I1014 13:07:59.655109 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"]
Oct 14 13:07:59.655697 master-1 kubenswrapper[4740]: I1014 13:07:59.655401 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:07:59.655697 master-1 kubenswrapper[4740]: I1014 13:07:59.655676 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"
Oct 14 13:07:59.657220 master-1 kubenswrapper[4740]: I1014 13:07:59.657178 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"]
Oct 14 13:07:59.658782 master-1 kubenswrapper[4740]: I1014 13:07:59.658737 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"]
Oct 14 13:07:59.660213 master-1 kubenswrapper[4740]: I1014 13:07:59.660166 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"]
Oct 14 13:07:59.661611 master-1 kubenswrapper[4740]: I1014 13:07:59.661545 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"]
Oct 14 13:07:59.662654 master-1 kubenswrapper[4740]: I1014 13:07:59.662613 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.662735 master-1 kubenswrapper[4740]: I1014 13:07:59.662662 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Oct 14 13:07:59.662735 master-1 kubenswrapper[4740]: I1014 13:07:59.662710 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Oct 14 13:07:59.662809 master-1 kubenswrapper[4740]: I1014 13:07:59.662734 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Oct 14 13:07:59.662809 master-1 kubenswrapper[4740]: I1014 13:07:59.662747 4740 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-network-operator"/"iptables-alerter-script"
Oct 14 13:07:59.663442 master-1 kubenswrapper[4740]: I1014 13:07:59.662894 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Oct 14 13:07:59.663442 master-1 kubenswrapper[4740]: I1014 13:07:59.663080 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.663442 master-1 kubenswrapper[4740]: I1014 13:07:59.663185 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Oct 14 13:07:59.663442 master-1 kubenswrapper[4740]: I1014 13:07:59.663270 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"]
Oct 14 13:07:59.663442 master-1 kubenswrapper[4740]: I1014 13:07:59.663314 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Oct 14 13:07:59.673632 master-1 kubenswrapper[4740]: I1014 13:07:59.673567 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Oct 14 13:07:59.675711 master-1 kubenswrapper[4740]: I1014 13:07:59.675644 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"]
Oct 14 13:07:59.676752 master-1 kubenswrapper[4740]: I1014 13:07:59.676698 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Oct 14 13:07:59.678584 master-1 kubenswrapper[4740]: I1014 13:07:59.678525 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.679302 master-1 kubenswrapper[4740]: I1014 13:07:59.679251 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"]
Oct 14 13:07:59.680893 master-1 kubenswrapper[4740]: I1014 13:07:59.680847 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Oct 14 13:07:59.681155 master-1 kubenswrapper[4740]: I1014 13:07:59.681116 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"]
Oct 14 13:07:59.682846 master-1 kubenswrapper[4740]: I1014 13:07:59.682816 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q"]
Oct 14 13:07:59.684183 master-1 kubenswrapper[4740]: I1014 13:07:59.684142 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt"]
Oct 14 13:07:59.686026 master-1 kubenswrapper[4740]: I1014 13:07:59.685975 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-9npgz"]
Oct 14 13:07:59.687304 master-1 kubenswrapper[4740]: I1014 13:07:59.687156 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-djh96"]
Oct 14 13:07:59.688490 master-1 kubenswrapper[4740]: I1014 13:07:59.688447 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-nh2qc"]
Oct 14 13:07:59.690649 master-1 kubenswrapper[4740]: I1014 13:07:59.690610 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-chrmm"]
Oct 14 13:07:59.692188 master-1 kubenswrapper[4740]: I1014 13:07:59.692113 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"]
Oct 14 13:07:59.693550 master-1 kubenswrapper[4740]: I1014 13:07:59.693422 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"]
Oct 14 13:07:59.694804 master-1 kubenswrapper[4740]: I1014 13:07:59.694755 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"]
Oct 14 13:07:59.696126 master-1 kubenswrapper[4740]: I1014 13:07:59.696098 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"]
Oct 14 13:07:59.698119 master-1 kubenswrapper[4740]: I1014 13:07:59.698075 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"]
Oct 14 13:07:59.698599 master-1 kubenswrapper[4740]: I1014 13:07:59.698556 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.699436 master-1 kubenswrapper[4740]: I1014 13:07:59.699398 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"]
Oct 14 13:07:59.700868 master-1 kubenswrapper[4740]: I1014 13:07:59.700807 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"]
Oct 14 13:07:59.717858 master-1 kubenswrapper[4740]: I1014 13:07:59.717791 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655ad1ce-582a-4728-8bfd-ca4164468de3-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:07:59.718030 master-1 kubenswrapper[4740]: I1014 13:07:59.717867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:07:59.718030 master-1 kubenswrapper[4740]: I1014 13:07:59.717909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6qv\" (UniqueName: \"kubernetes.io/projected/3d292fbb-b49c-4543-993b-738103c7419b-kube-api-access-kr6qv\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"
Oct 14 13:07:59.718030 master-1 kubenswrapper[4740]: I1014 13:07:59.717943 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-config\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.718030 master-1 kubenswrapper[4740]: I1014 13:07:59.717980 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fghw9\" (UniqueName: \"kubernetes.io/projected/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-kube-api-access-fghw9\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"
Oct 14 13:07:59.718030 master-1 kubenswrapper[4740]: I1014 13:07:59.718018 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a952fbc-3908-4e41-a914-9f63f47252e4-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"
Oct 14 13:07:59.718574 master-1 kubenswrapper[4740]: I1014 13:07:59.718055 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ngr\" (UniqueName: \"kubernetes.io/projected/3a952fbc-3908-4e41-a914-9f63f47252e4-kube-api-access-h7ngr\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"
Oct 14 13:07:59.718574 master-1 kubenswrapper[4740]: I1014 13:07:59.718093 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec50d087-259f-45c0-a15a-7fe949ae66dd-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"
Oct 14 13:07:59.718574 master-1 kubenswrapper[4740]: I1014 13:07:59.718129 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqnwp\" (UniqueName: \"kubernetes.io/projected/910af03d-893a-443d-b6ed-fe21c26951f4-kube-api-access-kqnwp\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:07:59.718574 master-1 kubenswrapper[4740]: I1014 13:07:59.718164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzss\" (UniqueName:
\"kubernetes.io/projected/f22c13e5-9b56-4f0c-a17a-677ba07226ff-kube-api-access-xfzss\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"
Oct 14 13:07:59.718574 master-1 kubenswrapper[4740]: I1014 13:07:59.718201 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb787\" (UniqueName: \"kubernetes.io/projected/1d68f537-be68-4623-bded-e5d7fb5c3573-kube-api-access-nb787\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:07:59.718574 master-1 kubenswrapper[4740]: I1014 13:07:59.718266 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"
Oct 14 13:07:59.719106 master-1 kubenswrapper[4740]: I1014 13:07:59.718454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec50d087-259f-45c0-a15a-7fe949ae66dd-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"
Oct 14 13:07:59.719106 master-1 kubenswrapper[4740]: I1014 13:07:59.718981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655ad1ce-582a-4728-8bfd-ca4164468de3-trusted-ca\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:07:59.719106 master-1 kubenswrapper[4740]: I1014 13:07:59.718575 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Oct 14 13:07:59.719106 master-1 kubenswrapper[4740]: I1014 13:07:59.719000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzc47\" (UniqueName: \"kubernetes.io/projected/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-kube-api-access-dzc47\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:07:59.719106 master-1 kubenswrapper[4740]: I1014 13:07:59.719074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9fs9\" (UniqueName: \"kubernetes.io/projected/2a106ff8-388a-4d30-8370-aad661eb4365-kube-api-access-z9fs9\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719128 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-client\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719182 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrw7t\" (UniqueName: \"kubernetes.io/projected/97b0a691-fe82-46b1-9f04-671aed7e10be-kube-api-access-qrw7t\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719281 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d68f537-be68-4623-bded-e5d7fb5c3573-config\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719336 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec50d087-259f-45c0-a15a-7fe949ae66dd-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-iptables-alerter-script\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719450 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxl25\" (UniqueName: \"kubernetes.io/projected/c4ca808a-394d-4a17-ac12-1df264c7ed92-kube-api-access-sxl25\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b51ef0bc-8b0e-4fab-b101-660ed408924c-images\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.719600 master-1 kubenswrapper[4740]: I1014 13:07:59.719558 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cznrk\" (UniqueName: \"kubernetes.io/projected/57526e49-7f51-4a66-8f48-0c485fc1e88f-kube-api-access-cznrk\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"
Oct 14 13:07:59.720368 master-1 kubenswrapper[4740]: I1014 13:07:59.719617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1a35e1e-333f-480c-b1d6-059475700627-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.720368 master-1 kubenswrapper[4740]: I1014 13:07:59.719677 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f8b5ead9-7212-4a2f-8105-92d1c5384308-available-featuregates\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:07:59.720368 master-1 kubenswrapper[4740]: I1014 13:07:59.719784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/62ef5e24-de36-454a-a34c-e741a86a6f96-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:07:59.720648 master-1 kubenswrapper[4740]: I1014 13:07:59.720448 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"
Oct 14 13:07:59.720648 master-1 kubenswrapper[4740]: I1014 13:07:59.720496 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/016573fd-7804-461e-83d7-1c019298f7c6-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-7ff2l\" (UID: \"016573fd-7804-461e-83d7-1c019298f7c6\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"
Oct 14 13:07:59.720648 master-1 kubenswrapper[4740]: I1014 13:07:59.720522 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772f8774-25f4-4987-bd40-8f3adda97e8b-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt"
Oct 14 13:07:59.720648 master-1 kubenswrapper[4740]: I1014 13:07:59.720546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm"
Oct 14 13:07:59.720648 master-1 kubenswrapper[4740]: I1014 13:07:59.720577 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.720648 master-1 kubenswrapper[4740]: I1014 13:07:59.720601 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f8b5ead9-7212-4a2f-8105-92d1c5384308-available-featuregates\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: E1014 13:07:59.720670 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4ca808a-394d-4a17-ac12-1df264c7ed92-auth-proxy-config\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720708 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-j9jkb\" (UniqueName: \"kubernetes.io/projected/f8b5ead9-7212-4a2f-8105-92d1c5384308-kube-api-access-j9jkb\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720735 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d68f537-be68-4623-bded-e5d7fb5c3573-auth-proxy-config\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: E1014 13:07:59.720762 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.220740411 +0000 UTC m=+106.031029750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-resolv-conf\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d7cccd-3100-4c4f-9303-fc57993b465e-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720918 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod 
\"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.720992 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-profile-collector-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.721041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.721137 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7ck6\" (UniqueName: \"kubernetes.io/projected/ec085d84-4833-4e0b-9e6a-35b983a7059b-kube-api-access-l7ck6\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.721192 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-snapshots\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 
13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.721253 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-serving-cert\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.721351 master-1 kubenswrapper[4740]: I1014 13:07:59.721294 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a952fbc-3908-4e41-a914-9f63f47252e4-config\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721349 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d8hx\" (UniqueName: \"kubernetes.io/projected/ab511c1d-28e3-448a-86ec-cea21871fd26-kube-api-access-4d8hx\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721410 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22c13e5-9b56-4f0c-a17a-677ba07226ff-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721448 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-service-ca-bundle\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721484 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-host-slash\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b51ef0bc-8b0e-4fab-b101-660ed408924c-images\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721523 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51ef0bc-8b0e-4fab-b101-660ed408924c-config\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721549 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4ca808a-394d-4a17-ac12-1df264c7ed92-auth-proxy-config\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " 
pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721599 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721637 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa5c762-a739-4cf4-929c-573bc5494b81-config\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721687 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98bm6\" (UniqueName: \"kubernetes.io/projected/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-kube-api-access-98bm6\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klhdd\" (UniqueName: \"kubernetes.io/projected/655ad1ce-582a-4728-8bfd-ca4164468de3-kube-api-access-klhdd\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz47q\" (UniqueName: \"kubernetes.io/projected/398ba6fd-0f8f-46af-b690-61a6eec9176b-kube-api-access-tz47q\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" Oct 14 13:07:59.722683 master-1 kubenswrapper[4740]: I1014 13:07:59.721753 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.721774 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-config\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.721800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.721823 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kgk\" (UniqueName: \"kubernetes.io/projected/2fa5c762-a739-4cf4-929c-573bc5494b81-kube-api-access-d5kgk\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: E1014 13:07:59.721839 4740 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.721849 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmhg\" (UniqueName: \"kubernetes.io/projected/b51ef0bc-8b0e-4fab-b101-660ed408924c-kube-api-access-wlmhg\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.721910 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/62ef5e24-de36-454a-a34c-e741a86a6f96-telemetry-config\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: E1014 13:07:59.721907 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls podName:a4ab71e1-9b1f-42ee-8abb-8f998e3cae74 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.221883569 +0000 UTC m=+106.032172938 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-n87md" (UID: "a4ab71e1-9b1f-42ee-8abb-8f998e3cae74") : secret "control-plane-machine-set-operator-tls" not found Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.721972 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftg2g\" (UniqueName: \"kubernetes.io/projected/24d7cccd-3100-4c4f-9303-fc57993b465e-kube-api-access-ftg2g\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: E1014 13:07:59.722013 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.722016 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a2b886b-005d-4d02-a231-ddacf42775ea-serving-cert\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: E1014 13:07:59.722093 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.222068514 +0000 UTC m=+106.032357883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "performance-addon-operator-webhook-cert" not found Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.722124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-ca\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.722168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" Oct 14 13:07:59.724035 master-1 kubenswrapper[4740]: I1014 13:07:59.722200 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: E1014 13:07:59.722307 4740 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722336 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8b5ead9-7212-4a2f-8105-92d1c5384308-serving-cert\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: E1014 13:07:59.722354 4740 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls podName:398ba6fd-0f8f-46af-b690-61a6eec9176b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.22233937 +0000 UTC m=+106.032628729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls") pod "ingress-operator-766ddf4575-xhdjt" (UID: "398ba6fd-0f8f-46af-b690-61a6eec9176b") : secret "metrics-tls" not found Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: E1014 13:07:59.722360 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722372 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab511c1d-28e3-448a-86ec-cea21871fd26-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: E1014 13:07:59.722437 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.222394442 +0000 UTC m=+106.032683781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "node-tuning-operator-tls" not found Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722460 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-images\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722519 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptnqq\" (UniqueName: \"kubernetes.io/projected/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-kube-api-access-ptnqq\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722598 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnc2f\" (UniqueName: \"kubernetes.io/projected/2a2b886b-005d-4d02-a231-ddacf42775ea-kube-api-access-tnc2f\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: 
\"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722623 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b0a691-fe82-46b1-9f04-671aed7e10be-serving-cert\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772f8774-25f4-4987-bd40-8f3adda97e8b-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" Oct 14 13:07:59.725314 master-1 kubenswrapper[4740]: I1014 13:07:59.722695 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772f8774-25f4-4987-bd40-8f3adda97e8b-config\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: I1014 13:07:59.722748 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: I1014 13:07:59.722820 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pg7b\" (UniqueName: \"kubernetes.io/projected/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-kube-api-access-6pg7b\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.722841 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.722874 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.222864284 +0000 UTC m=+106.033153623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-operator-tls" not found Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: I1014 13:07:59.722875 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: I1014 13:07:59.722931 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: I1014 13:07:59.722960 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.722963 4740 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: 
I1014 13:07:59.722983 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.723017 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls podName:62ef5e24-de36-454a-a34c-e741a86a6f96 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.222997107 +0000 UTC m=+106.033286456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-cxtgh" (UID: "62ef5e24-de36-454a-a34c-e741a86a6f96") : secret "cluster-monitoring-operator-tls" not found
Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.723047 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.723104 4740 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: I1014 13:07:59.723046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2tt\" (UniqueName: \"kubernetes.io/projected/7be129fe-d04d-4384-a0e9-76b3148a1f3e-kube-api-access-zk2tt\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.723113 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert podName:7be129fe-d04d-4384-a0e9-76b3148a1f3e nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.223071749 +0000 UTC m=+106.033361088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-j2bjv" (UID: "7be129fe-d04d-4384-a0e9-76b3148a1f3e") : secret "package-server-manager-serving-cert" not found
Oct 14 13:07:59.726566 master-1 kubenswrapper[4740]: E1014 13:07:59.723160 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: E1014 13:07:59.723179 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert podName:1fa31cdd-e051-4987-a1a2-814fc7445e6b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.223164401 +0000 UTC m=+106.033453740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-4cf2d" (UID: "1fa31cdd-e051-4987-a1a2-814fc7445e6b") : secret "cloud-credential-operator-serving-cert" not found
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-config\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: E1014 13:07:59.723275 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert podName:ab511c1d-28e3-448a-86ec-cea21871fd26 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.223216022 +0000 UTC m=+106.033505451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert") pod "cluster-autoscaler-operator-7ff449c7c5-nmpfk" (UID: "ab511c1d-28e3-448a-86ec-cea21871fd26") : secret "cluster-autoscaler-operator-cert" not found
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723323 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723394 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a35e1e-333f-480c-b1d6-059475700627-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723453 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dlkx\" (UniqueName: \"kubernetes.io/projected/b1a35e1e-333f-480c-b1d6-059475700627-kube-api-access-5dlkx\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-var-run-resolv-conf\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa5c762-a739-4cf4-929c-573bc5494b81-serving-cert\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbz8k\" (UniqueName: \"kubernetes.io/projected/016573fd-7804-461e-83d7-1c019298f7c6-kube-api-access-zbz8k\") pod \"cluster-storage-operator-56d4b95494-7ff2l\" (UID: \"016573fd-7804-461e-83d7-1c019298f7c6\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723684 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d7cccd-3100-4c4f-9303-fc57993b465e-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723732 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723789 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-config\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:07:59.727510 master-1 kubenswrapper[4740]: I1014 13:07:59.723844 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ppl\" (UniqueName: \"kubernetes.io/projected/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-kube-api-access-f4ppl\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.723901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f22c13e5-9b56-4f0c-a17a-677ba07226ff-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztzx6\" (UniqueName: \"kubernetes.io/projected/db9c19df-41e6-4216-829f-dd2975ff5108-kube-api-access-ztzx6\") pod \"csi-snapshot-controller-operator-7ff96dd767-9htmf\" (UID: \"db9c19df-41e6-4216-829f-dd2975ff5108\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq44g\" (UniqueName: \"kubernetes.io/projected/01742ba1-f43b-4ff2-97d5-1a535e925a0f-kube-api-access-wq44g\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-config\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724197 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51ef0bc-8b0e-4fab-b101-660ed408924c-config\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724260 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbpgx\" (UniqueName: \"kubernetes.io/projected/62ef5e24-de36-454a-a34c-e741a86a6f96-kube-api-access-nbpgx\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724324 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt496\" (UniqueName: \"kubernetes.io/projected/1fa31cdd-e051-4987-a1a2-814fc7445e6b-kube-api-access-nt496\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724418 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724411 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-images\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724526 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-ca-bundle\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: E1014 13:07:59.724572 4740 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724651 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: E1014 13:07:59.724659 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls podName:b51ef0bc-8b0e-4fab-b101-660ed408924c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.224640757 +0000 UTC m=+106.034930096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-s66vj" (UID: "b51ef0bc-8b0e-4fab-b101-660ed408924c") : secret "machine-api-operator-tls" not found
Oct 14 13:07:59.728394 master-1 kubenswrapper[4740]: I1014 13:07:59.724696 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b1a35e1e-333f-480c-b1d6-059475700627-trusted-ca\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.724715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ca808a-394d-4a17-ac12-1df264c7ed92-images\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.724768 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.724803 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/398ba6fd-0f8f-46af-b690-61a6eec9176b-trusted-ca\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.724836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/398ba6fd-0f8f-46af-b690-61a6eec9176b-bound-sa-token\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.724841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-config\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.724871 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.724926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cco-trusted-ca\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: E1014 13:07:59.725051 4740 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: E1014 13:07:59.725091 4740 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: E1014 13:07:59.725109 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls podName:b1a35e1e-333f-480c-b1d6-059475700627 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.225086717 +0000 UTC m=+106.035376096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-gspqw" (UID: "b1a35e1e-333f-480c-b1d6-059475700627") : secret "image-registry-operator-tls" not found
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: E1014 13:07:59.725170 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls podName:c4ca808a-394d-4a17-ac12-1df264c7ed92 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.225139799 +0000 UTC m=+106.035429208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls") pod "machine-config-operator-7b75469658-j2dbc" (UID: "c4ca808a-394d-4a17-ac12-1df264c7ed92") : secret "mco-proxy-tls" not found
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.725380 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4ca808a-394d-4a17-ac12-1df264c7ed92-images\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.725452 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-config\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.726013 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab511c1d-28e3-448a-86ec-cea21871fd26-auth-proxy-config\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:07:59.729684 master-1 kubenswrapper[4740]: I1014 13:07:59.726342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/398ba6fd-0f8f-46af-b690-61a6eec9176b-trusted-ca\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:07:59.730615 master-1 kubenswrapper[4740]: I1014 13:07:59.729219 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8b5ead9-7212-4a2f-8105-92d1c5384308-serving-cert\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:07:59.730615 master-1 kubenswrapper[4740]: I1014 13:07:59.729984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-serving-cert\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:07:59.738873 master-1 kubenswrapper[4740]: I1014 13:07:59.738810 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Oct 14 13:07:59.757631 master-1 kubenswrapper[4740]: I1014 13:07:59.757595 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.778800 master-1 kubenswrapper[4740]: I1014 13:07:59.778760 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Oct 14 13:07:59.798328 master-1 kubenswrapper[4740]: I1014 13:07:59.798297 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Oct 14 13:07:59.819056 master-1 kubenswrapper[4740]: I1014 13:07:59.818999 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.825847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4ppl\" (UniqueName: \"kubernetes.io/projected/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-kube-api-access-f4ppl\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.825927 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f22c13e5-9b56-4f0c-a17a-677ba07226ff-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826025 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq44g\" (UniqueName: \"kubernetes.io/projected/01742ba1-f43b-4ff2-97d5-1a535e925a0f-kube-api-access-wq44g\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826072 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826142 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-ca-bundle\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826265 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826438 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6qv\" (UniqueName: \"kubernetes.io/projected/3d292fbb-b49c-4543-993b-738103c7419b-kube-api-access-kr6qv\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826483 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ngr\" (UniqueName: \"kubernetes.io/projected/3a952fbc-3908-4e41-a914-9f63f47252e4-kube-api-access-h7ngr\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826528 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec50d087-259f-45c0-a15a-7fe949ae66dd-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826570 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/f22c13e5-9b56-4f0c-a17a-677ba07226ff-operand-assets\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826569 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-config\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826615 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fghw9\" (UniqueName: \"kubernetes.io/projected/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-kube-api-access-fghw9\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a952fbc-3908-4e41-a914-9f63f47252e4-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"
Oct 14 13:07:59.827350 master-1 kubenswrapper[4740]: I1014 13:07:59.826651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqnwp\" (UniqueName: \"kubernetes.io/projected/910af03d-893a-443d-b6ed-fe21c26951f4-kube-api-access-kqnwp\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.826671 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzss\" (UniqueName: \"kubernetes.io/projected/f22c13e5-9b56-4f0c-a17a-677ba07226ff-kube-api-access-xfzss\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.826688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb787\" (UniqueName: \"kubernetes.io/projected/1d68f537-be68-4623-bded-e5d7fb5c3573-kube-api-access-nb787\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: E1014 13:07:59.826696 4740 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: E1014 13:07:59.826784 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls podName:1d68f537-be68-4623-bded-e5d7fb5c3573 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.326755778 +0000 UTC m=+106.137045147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls") pod "machine-approver-7876f99457-kpq7g" (UID: "1d68f537-be68-4623-bded-e5d7fb5c3573") : secret "machine-approver-tls" not found
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.826706 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9fs9\" (UniqueName: \"kubernetes.io/projected/2a106ff8-388a-4d30-8370-aad661eb4365-kube-api-access-z9fs9\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.827019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-ca-bundle\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.827035 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-client\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.827153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: E1014 13:07:59.827195 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: E1014 13:07:59.827204 4740 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: E1014 13:07:59.827250 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs podName:01742ba1-f43b-4ff2-97d5-1a535e925a0f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.327221139 +0000 UTC m=+106.137510468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs") pod "multus-admission-controller-77b66fddc8-9npgz" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f") : secret "multus-admission-controller-secret" not found
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.827192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec50d087-259f-45c0-a15a-7fe949ae66dd-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"
Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: E1014 13:07:59.827288 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls podName:910af03d-893a-443d-b6ed-fe21c26951f4 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.32726827 +0000 UTC m=+106.137557659 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls") pod "dns-operator-7769d9677-nh2qc" (UID: "910af03d-893a-443d-b6ed-fe21c26951f4") : secret "metrics-tls" not found Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.827359 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrw7t\" (UniqueName: \"kubernetes.io/projected/97b0a691-fe82-46b1-9f04-671aed7e10be-kube-api-access-qrw7t\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.829093 master-1 kubenswrapper[4740]: I1014 13:07:59.827397 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d68f537-be68-4623-bded-e5d7fb5c3573-config\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827428 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec50d087-259f-45c0-a15a-7fe949ae66dd-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cznrk\" (UniqueName: \"kubernetes.io/projected/57526e49-7f51-4a66-8f48-0c485fc1e88f-kube-api-access-cznrk\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827496 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-iptables-alerter-script\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827667 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/016573fd-7804-461e-83d7-1c019298f7c6-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-7ff2l\" (UID: \"016573fd-7804-461e-83d7-1c019298f7c6\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772f8774-25f4-4987-bd40-8f3adda97e8b-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: E1014 13:07:59.827741 4740 secret.go:189] 
Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: E1014 13:07:59.827772 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs podName:ec085d84-4833-4e0b-9e6a-35b983a7059b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.327763973 +0000 UTC m=+106.138053302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs") pod "multus-admission-controller-77b66fddc8-mgc7h" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b") : secret "multus-admission-controller-secret" not found Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827789 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d68f537-be68-4623-bded-e5d7fb5c3573-auth-proxy-config\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827820 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827860 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-trusted-ca-bundle\") pod 
\"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827917 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-resolv-conf\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827936 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d7cccd-3100-4c4f-9303-fc57993b465e-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827955 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7ck6\" (UniqueName: \"kubernetes.io/projected/ec085d84-4833-4e0b-9e6a-35b983a7059b-kube-api-access-l7ck6\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:07:59.829900 master-1 kubenswrapper[4740]: I1014 13:07:59.827976 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:07:59.830645 master-1 
kubenswrapper[4740]: I1014 13:07:59.828002 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-profile-collector-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828016 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-resolv-conf\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828038 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22c13e5-9b56-4f0c-a17a-677ba07226ff-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-service-ca-bundle\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: 
\"kubernetes.io/empty-dir/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-snapshots\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828273 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-serving-cert\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828330 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a952fbc-3908-4e41-a914-9f63f47252e4-config\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-config\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828402 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 
13:07:59.828451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-host-slash\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa5c762-a739-4cf4-929c-573bc5494b81-config\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828636 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kgk\" (UniqueName: \"kubernetes.io/projected/2fa5c762-a739-4cf4-929c-573bc5494b81-kube-api-access-d5kgk\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828736 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-config\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: 
\"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: E1014 13:07:59.828748 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 14 13:07:59.830645 master-1 kubenswrapper[4740]: I1014 13:07:59.828808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a2b886b-005d-4d02-a231-ddacf42775ea-serving-cert\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.828824 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d68f537-be68-4623-bded-e5d7fb5c3573-config\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.828853 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-host-slash\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.828863 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d68f537-be68-4623-bded-e5d7fb5c3573-auth-proxy-config\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " 
pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.828860 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-ca\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.828897 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-service-ca-bundle\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: E1014 13:07:59.829002 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert podName:3d292fbb-b49c-4543-993b-738103c7419b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.328989862 +0000 UTC m=+106.139279191 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert") pod "catalog-operator-f966fb6f8-dwwm2" (UID: "3d292fbb-b49c-4543-993b-738103c7419b") : secret "catalog-operator-serving-cert" not found Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.829031 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftg2g\" (UniqueName: \"kubernetes.io/projected/24d7cccd-3100-4c4f-9303-fc57993b465e-kube-api-access-ftg2g\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.829200 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.829298 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-iptables-alerter-script\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: E1014 13:07:59.829395 4740 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: E1014 13:07:59.829444 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics podName:2a106ff8-388a-4d30-8370-aad661eb4365 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.329427983 +0000 UTC m=+106.139717532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-djh96" (UID: "2a106ff8-388a-4d30-8370-aad661eb4365") : secret "marketplace-operator-metrics" not found Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.829439 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec50d087-259f-45c0-a15a-7fe949ae66dd-config\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.829506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 13:07:59.829579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b0a691-fe82-46b1-9f04-671aed7e10be-serving-cert\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.831499 master-1 kubenswrapper[4740]: I1014 
13:07:59.829640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772f8774-25f4-4987-bd40-8f3adda97e8b-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.829840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-ca\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.829857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-snapshots\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830108 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-trusted-ca-bundle\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptnqq\" (UniqueName: \"kubernetes.io/projected/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-kube-api-access-ptnqq\") pod \"iptables-alerter-m6qfh\" (UID: 
\"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnc2f\" (UniqueName: \"kubernetes.io/projected/2a2b886b-005d-4d02-a231-ddacf42775ea-kube-api-access-tnc2f\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pg7b\" (UniqueName: \"kubernetes.io/projected/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-kube-api-access-6pg7b\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830497 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772f8774-25f4-4987-bd40-8f3adda97e8b-config\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a952fbc-3908-4e41-a914-9f63f47252e4-config\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830595 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa5c762-a739-4cf4-929c-573bc5494b81-config\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.830597 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-service-ca-bundle\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.831435 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-config\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.832021 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a2b886b-005d-4d02-a231-ddacf42775ea-serving-cert\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.832532 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/016573fd-7804-461e-83d7-1c019298f7c6-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-56d4b95494-7ff2l\" (UID: \"016573fd-7804-461e-83d7-1c019298f7c6\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"
Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.832526 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-serving-cert\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"
Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.832582 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-config\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"
Oct 14 13:07:59.834187 master-1 kubenswrapper[4740]: I1014 13:07:59.832738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-var-run-resolv-conf\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.832803 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.832822 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-var-run-resolv-conf\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.832882 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: E1014 13:07:59.832931 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.832939 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa5c762-a739-4cf4-929c-573bc5494b81-serving-cert\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: E1014 13:07:59.833002 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert podName:57526e49-7f51-4a66-8f48-0c485fc1e88f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:00.332976019 +0000 UTC m=+106.143265378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert") pod "olm-operator-867f8475d9-fl56c" (UID: "57526e49-7f51-4a66-8f48-0c485fc1e88f") : secret "olm-operator-serving-cert" not found
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.833004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-trusted-ca-bundle\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.833038 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbz8k\" (UniqueName: \"kubernetes.io/projected/016573fd-7804-461e-83d7-1c019298f7c6-kube-api-access-zbz8k\") pod \"cluster-storage-operator-56d4b95494-7ff2l\" (UID: \"016573fd-7804-461e-83d7-1c019298f7c6\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.833084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d7cccd-3100-4c4f-9303-fc57993b465e-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.832923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772f8774-25f4-4987-bd40-8f3adda97e8b-config\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.833410 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-serving-cert\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.833543 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b0a691-fe82-46b1-9f04-671aed7e10be-config\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.833599 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/772f8774-25f4-4987-bd40-8f3adda97e8b-serving-cert\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.833677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-trusted-ca\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.834033 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-service-ca\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.834787 master-1 kubenswrapper[4740]: I1014 13:07:59.834401 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b0a691-fe82-46b1-9f04-671aed7e10be-serving-cert\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"
Oct 14 13:07:59.835322 master-1 kubenswrapper[4740]: I1014 13:07:59.835018 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a2b886b-005d-4d02-a231-ddacf42775ea-etcd-client\") pod \"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"
Oct 14 13:07:59.835322 master-1 kubenswrapper[4740]: I1014 13:07:59.835196 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-profile-collector-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"
Oct 14 13:07:59.836098 master-1 kubenswrapper[4740]: I1014 13:07:59.835396 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-profile-collector-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"
Oct 14 13:07:59.836098 master-1 kubenswrapper[4740]: I1014 13:07:59.835674 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/f22c13e5-9b56-4f0c-a17a-677ba07226ff-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"
Oct 14 13:07:59.836098 master-1 kubenswrapper[4740]: I1014 13:07:59.835719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a952fbc-3908-4e41-a914-9f63f47252e4-serving-cert\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"
Oct 14 13:07:59.837152 master-1 kubenswrapper[4740]: I1014 13:07:59.837102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa5c762-a739-4cf4-929c-573bc5494b81-serving-cert\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q"
Oct 14 13:07:59.838326 master-1 kubenswrapper[4740]: I1014 13:07:59.838283 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec50d087-259f-45c0-a15a-7fe949ae66dd-serving-cert\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"
Oct 14 13:07:59.838435 master-1 kubenswrapper[4740]: I1014 13:07:59.838416 4740 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Oct 14 13:07:59.858618 master-1 kubenswrapper[4740]: I1014 13:07:59.858573 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Oct 14 13:07:59.879067 master-1 kubenswrapper[4740]: I1014 13:07:59.879026 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Oct 14 13:07:59.899484 master-1 kubenswrapper[4740]: I1014 13:07:59.899443 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Oct 14 13:07:59.904383 master-1 kubenswrapper[4740]: I1014 13:07:59.904339 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d7cccd-3100-4c4f-9303-fc57993b465e-config\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"
Oct 14 13:07:59.918787 master-1 kubenswrapper[4740]: I1014 13:07:59.918748 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Oct 14 13:07:59.938455 master-1 kubenswrapper[4740]: I1014 13:07:59.938296 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Oct 14 13:07:59.942520 master-1 kubenswrapper[4740]: I1014 13:07:59.942459 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24d7cccd-3100-4c4f-9303-fc57993b465e-serving-cert\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"
Oct 14 13:07:59.942763 master-1 kubenswrapper[4740]: I1014 13:07:59.942718 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:07:59.958834 master-1 kubenswrapper[4740]: I1014 13:07:59.958668 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Oct 14 13:08:00.023488 master-1 kubenswrapper[4740]: I1014 13:08:00.023423 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzc47\" (UniqueName: \"kubernetes.io/projected/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-kube-api-access-dzc47\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:08:00.043468 master-1 kubenswrapper[4740]: I1014 13:08:00.039613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxl25\" (UniqueName: \"kubernetes.io/projected/c4ca808a-394d-4a17-ac12-1df264c7ed92-kube-api-access-sxl25\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:08:00.062691 master-1 kubenswrapper[4740]: I1014 13:08:00.062613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1a35e1e-333f-480c-b1d6-059475700627-bound-sa-token\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:08:00.084038 master-1 kubenswrapper[4740]: I1014 13:08:00.083979 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9jkb\" (UniqueName: \"kubernetes.io/projected/f8b5ead9-7212-4a2f-8105-92d1c5384308-kube-api-access-j9jkb\") pod \"openshift-config-operator-55957b47d5-vtkr6\" (UID: \"f8b5ead9-7212-4a2f-8105-92d1c5384308\") " pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:08:00.094580 master-1 kubenswrapper[4740]: I1014 13:08:00.094536 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c-kube-api-access\") pod \"kube-apiserver-operator-68f5d95b74-bqdtw\" (UID: \"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:08:00.114346 master-1 kubenswrapper[4740]: I1014 13:08:00.114304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d8hx\" (UniqueName: \"kubernetes.io/projected/ab511c1d-28e3-448a-86ec-cea21871fd26-kube-api-access-4d8hx\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:08:00.145016 master-1 kubenswrapper[4740]: I1014 13:08:00.144923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98bm6\" (UniqueName: \"kubernetes.io/projected/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-kube-api-access-98bm6\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:00.163212 master-1 kubenswrapper[4740]: I1014 13:08:00.163147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz47q\" (UniqueName: \"kubernetes.io/projected/398ba6fd-0f8f-46af-b690-61a6eec9176b-kube-api-access-tz47q\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:08:00.184570 master-1 kubenswrapper[4740]: I1014 13:08:00.184502 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmhg\" (UniqueName: \"kubernetes.io/projected/b51ef0bc-8b0e-4fab-b101-660ed408924c-kube-api-access-wlmhg\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:08:00.196405 master-1 kubenswrapper[4740]: I1014 13:08:00.196352 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klhdd\" (UniqueName: \"kubernetes.io/projected/655ad1ce-582a-4728-8bfd-ca4164468de3-kube-api-access-klhdd\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:00.216368 master-1 kubenswrapper[4740]: I1014 13:08:00.216305 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2tt\" (UniqueName: \"kubernetes.io/projected/7be129fe-d04d-4384-a0e9-76b3148a1f3e-kube-api-access-zk2tt\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:08:00.231910 master-1 kubenswrapper[4740]: I1014 13:08:00.231857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dlkx\" (UniqueName: \"kubernetes.io/projected/b1a35e1e-333f-480c-b1d6-059475700627-kube-api-access-5dlkx\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:08:00.239989 master-1 kubenswrapper[4740]: I1014 13:08:00.239932 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:08:00.240072 master-1 kubenswrapper[4740]: I1014 13:08:00.240000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:08:00.240103 master-1 kubenswrapper[4740]: I1014 13:08:00.240065 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:08:00.240359 master-1 kubenswrapper[4740]: E1014 13:08:00.240275 4740 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Oct 14 13:08:00.240359 master-1 kubenswrapper[4740]: E1014 13:08:00.240367 4740 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Oct 14 13:08:00.240449 master-1 kubenswrapper[4740]: I1014 13:08:00.240297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:00.240597 master-1 kubenswrapper[4740]: E1014 13:08:00.240411 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Oct 14 13:08:00.240722 master-1 kubenswrapper[4740]: E1014 13:08:00.240452 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls podName:b1a35e1e-333f-480c-b1d6-059475700627 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.240428878 +0000 UTC m=+107.050718237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-gspqw" (UID: "b1a35e1e-333f-480c-b1d6-059475700627") : secret "image-registry-operator-tls" not found
Oct 14 13:08:00.240779 master-1 kubenswrapper[4740]: E1014 13:08:00.240282 4740 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Oct 14 13:08:00.240807 master-1 kubenswrapper[4740]: E1014 13:08:00.240791 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls podName:b51ef0bc-8b0e-4fab-b101-660ed408924c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.240723166 +0000 UTC m=+107.051012535 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-s66vj" (UID: "b51ef0bc-8b0e-4fab-b101-660ed408924c") : secret "machine-api-operator-tls" not found
Oct 14 13:08:00.241049 master-1 kubenswrapper[4740]: E1014 13:08:00.241006 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.240974012 +0000 UTC m=+107.051263381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-webhook-server-cert" not found
Oct 14 13:08:00.241096 master-1 kubenswrapper[4740]: E1014 13:08:00.241067 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls podName:c4ca808a-394d-4a17-ac12-1df264c7ed92 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.241053214 +0000 UTC m=+107.051342573 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls") pod "machine-config-operator-7b75469658-j2dbc" (UID: "c4ca808a-394d-4a17-ac12-1df264c7ed92") : secret "mco-proxy-tls" not found
Oct 14 13:08:00.241130 master-1 kubenswrapper[4740]: I1014 13:08:00.241101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:08:00.241178 master-1 kubenswrapper[4740]: I1014 13:08:00.241154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:00.241282 master-1 kubenswrapper[4740]: I1014 13:08:00.241259 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:08:00.241327 master-1 kubenswrapper[4740]: I1014 13:08:00.241305 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:00.241419 master-1 kubenswrapper[4740]: I1014 13:08:00.241389 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:00.241448 master-1 kubenswrapper[4740]: E1014 13:08:00.241422 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Oct 14 13:08:00.241448 master-1 kubenswrapper[4740]: E1014 13:08:00.241424 4740 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Oct 14 13:08:00.241502 master-1 kubenswrapper[4740]: I1014 13:08:00.241474 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:08:00.241502 master-1 kubenswrapper[4740]: E1014 13:08:00.241494 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls podName:a4ab71e1-9b1f-42ee-8abb-8f998e3cae74 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.241477424 +0000 UTC m=+107.051766793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-n87md" (UID: "a4ab71e1-9b1f-42ee-8abb-8f998e3cae74") : secret "control-plane-machine-set-operator-tls" not found
Oct 14 13:08:00.241564 master-1 kubenswrapper[4740]: I1014 13:08:00.241532 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:08:00.241618 master-1 kubenswrapper[4740]: I1014 13:08:00.241587 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:08:00.241684 master-1 kubenswrapper[4740]: I1014 13:08:00.241653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:08:00.241717 master-1 kubenswrapper[4740]: E1014 13:08:00.241539 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Oct 14 13:08:00.241747 master-1 kubenswrapper[4740]: E1014 13:08:00.241727 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Oct 14 13:08:00.241773 master-1 kubenswrapper[4740]: E1014 13:08:00.241548 4740 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Oct 14 13:08:00.241773 master-1 kubenswrapper[4740]: E1014 13:08:00.241765 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.24174509 +0000 UTC m=+107.052034449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "node-tuning-operator-tls" not found
Oct 14 13:08:00.241828 master-1 kubenswrapper[4740]: E1014 13:08:00.241580 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Oct 14 13:08:00.241854 master-1 kubenswrapper[4740]: E1014 13:08:00.241797 4740 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Oct 14 13:08:00.241932 master-1 kubenswrapper[4740]: E1014 13:08:00.241606 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Oct 14 13:08:00.241976 master-1 kubenswrapper[4740]: E1014 13:08:00.241656 4740 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Oct 14 13:08:00.242006 master-1 kubenswrapper[4740]: E1014 13:08:00.241800 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert podName:7be129fe-d04d-4384-a0e9-76b3148a1f3e nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.241786431 +0000 UTC m=+107.052075790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-j2bjv" (UID: "7be129fe-d04d-4384-a0e9-76b3148a1f3e") : secret "package-server-manager-serving-cert" not found
Oct 14 13:08:00.242039 master-1 kubenswrapper[4740]: E1014 13:08:00.242027 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls podName:398ba6fd-0f8f-46af-b690-61a6eec9176b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.242005217 +0000 UTC m=+107.052294586 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls") pod "ingress-operator-766ddf4575-xhdjt" (UID: "398ba6fd-0f8f-46af-b690-61a6eec9176b") : secret "metrics-tls" not found
Oct 14 13:08:00.242082 master-1 kubenswrapper[4740]: E1014 13:08:00.242064 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.242050828 +0000 UTC m=+107.052340187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "performance-addon-operator-webhook-cert" not found
Oct 14 13:08:00.242116 master-1 kubenswrapper[4740]: E1014 13:08:00.242100 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.242089129 +0000 UTC m=+107.052378488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-operator-tls" not found
Oct 14 13:08:00.242151 master-1 kubenswrapper[4740]: E1014 13:08:00.242130 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls podName:62ef5e24-de36-454a-a34c-e741a86a6f96 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.24211925 +0000 UTC m=+107.052408619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-cxtgh" (UID: "62ef5e24-de36-454a-a34c-e741a86a6f96") : secret "cluster-monitoring-operator-tls" not found
Oct 14 13:08:00.242180 master-1 kubenswrapper[4740]: E1014 13:08:00.242161 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert podName:ab511c1d-28e3-448a-86ec-cea21871fd26 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.242150901 +0000 UTC m=+107.052440270 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert") pod "cluster-autoscaler-operator-7ff449c7c5-nmpfk" (UID: "ab511c1d-28e3-448a-86ec-cea21871fd26") : secret "cluster-autoscaler-operator-cert" not found
Oct 14 13:08:00.242211 master-1 kubenswrapper[4740]: E1014 13:08:00.242191 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert podName:1fa31cdd-e051-4987-a1a2-814fc7445e6b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.242180332 +0000 UTC m=+107.052469691 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-4cf2d" (UID: "1fa31cdd-e051-4987-a1a2-814fc7445e6b") : secret "cloud-credential-operator-serving-cert" not found
Oct 14 13:08:00.266392 master-1 kubenswrapper[4740]: I1014 13:08:00.266334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztzx6\" (UniqueName: \"kubernetes.io/projected/db9c19df-41e6-4216-829f-dd2975ff5108-kube-api-access-ztzx6\") pod \"csi-snapshot-controller-operator-7ff96dd767-9htmf\" (UID: \"db9c19df-41e6-4216-829f-dd2975ff5108\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf"
Oct 14 13:08:00.279536 master-1 kubenswrapper[4740]: I1014 13:08:00.279467 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"
Oct 14 13:08:00.281964 master-1 kubenswrapper[4740]: I1014 13:08:00.281190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt496\" (UniqueName: \"kubernetes.io/projected/1fa31cdd-e051-4987-a1a2-814fc7445e6b-kube-api-access-nt496\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:08:00.292581 master-1 kubenswrapper[4740]: I1014 13:08:00.292492 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbpgx\" (UniqueName: \"kubernetes.io/projected/62ef5e24-de36-454a-a34c-e741a86a6f96-kube-api-access-nbpgx\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:08:00.319154
master-1 kubenswrapper[4740]: I1014 13:08:00.319078 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/398ba6fd-0f8f-46af-b690-61a6eec9176b-bound-sa-token\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" Oct 14 13:08:00.342986 master-1 kubenswrapper[4740]: I1014 13:08:00.342919 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:08:00.342986 master-1 kubenswrapper[4740]: I1014 13:08:00.343015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" Oct 14 13:08:00.343373 master-1 kubenswrapper[4740]: I1014 13:08:00.343108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:08:00.343373 master-1 kubenswrapper[4740]: I1014 13:08:00.343185 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: 
\"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:08:00.343373 master-1 kubenswrapper[4740]: E1014 13:08:00.343287 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 14 13:08:00.343539 master-1 kubenswrapper[4740]: E1014 13:08:00.343369 4740 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:00.343539 master-1 kubenswrapper[4740]: I1014 13:08:00.343376 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:08:00.343539 master-1 kubenswrapper[4740]: E1014 13:08:00.343433 4740 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 14 13:08:00.343539 master-1 kubenswrapper[4740]: E1014 13:08:00.343504 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:00.343539 master-1 kubenswrapper[4740]: E1014 13:08:00.343513 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:00.343539 master-1 kubenswrapper[4740]: E1014 13:08:00.343447 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert podName:57526e49-7f51-4a66-8f48-0c485fc1e88f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.343407541 +0000 UTC m=+107.153696910 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert") pod "olm-operator-867f8475d9-fl56c" (UID: "57526e49-7f51-4a66-8f48-0c485fc1e88f") : secret "olm-operator-serving-cert" not found Oct 14 13:08:00.343884 master-1 kubenswrapper[4740]: E1014 13:08:00.343575 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs podName:ec085d84-4833-4e0b-9e6a-35b983a7059b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.343551184 +0000 UTC m=+107.153840543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs") pod "multus-admission-controller-77b66fddc8-mgc7h" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b") : secret "multus-admission-controller-secret" not found Oct 14 13:08:00.343884 master-1 kubenswrapper[4740]: E1014 13:08:00.343617 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs podName:01742ba1-f43b-4ff2-97d5-1a535e925a0f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.343586755 +0000 UTC m=+107.153876124 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs") pod "multus-admission-controller-77b66fddc8-9npgz" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f") : secret "multus-admission-controller-secret" not found Oct 14 13:08:00.343884 master-1 kubenswrapper[4740]: E1014 13:08:00.343687 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls podName:910af03d-893a-443d-b6ed-fe21c26951f4 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:01.343673367 +0000 UTC m=+107.153962746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls") pod "dns-operator-7769d9677-nh2qc" (UID: "910af03d-893a-443d-b6ed-fe21c26951f4") : secret "metrics-tls" not found Oct 14 13:08:00.343884 master-1 kubenswrapper[4740]: I1014 13:08:00.343832 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:08:00.343884 master-1 kubenswrapper[4740]: E1014 13:08:00.343875 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls podName:1d68f537-be68-4623-bded-e5d7fb5c3573 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.343840121 +0000 UTC m=+107.154129490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls") pod "machine-approver-7876f99457-kpq7g" (UID: "1d68f537-be68-4623-bded-e5d7fb5c3573") : secret "machine-approver-tls" not found Oct 14 13:08:00.344168 master-1 kubenswrapper[4740]: E1014 13:08:00.343951 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 14 13:08:00.344168 master-1 kubenswrapper[4740]: E1014 13:08:00.343995 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert podName:3d292fbb-b49c-4543-993b-738103c7419b nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:01.343981094 +0000 UTC m=+107.154270463 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert") pod "catalog-operator-f966fb6f8-dwwm2" (UID: "3d292fbb-b49c-4543-993b-738103c7419b") : secret "catalog-operator-serving-cert" not found Oct 14 13:08:00.344168 master-1 kubenswrapper[4740]: I1014 13:08:00.343982 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:08:00.344168 master-1 kubenswrapper[4740]: E1014 13:08:00.344063 4740 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 14 13:08:00.344168 master-1 kubenswrapper[4740]: E1014 13:08:00.344109 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics podName:2a106ff8-388a-4d30-8370-aad661eb4365 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:01.344095667 +0000 UTC m=+107.154385026 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-djh96" (UID: "2a106ff8-388a-4d30-8370-aad661eb4365") : secret "marketplace-operator-metrics" not found Oct 14 13:08:00.344788 master-1 kubenswrapper[4740]: I1014 13:08:00.344716 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4ppl\" (UniqueName: \"kubernetes.io/projected/63a7ff79-3d66-457a-bb4a-dc851ca9d4e8-kube-api-access-f4ppl\") pod \"insights-operator-7dcf5bd85b-chrmm\" (UID: \"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8\") " pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:08:00.353680 master-1 kubenswrapper[4740]: I1014 13:08:00.353601 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" Oct 14 13:08:00.366727 master-1 kubenswrapper[4740]: I1014 13:08:00.366668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9fs9\" (UniqueName: \"kubernetes.io/projected/2a106ff8-388a-4d30-8370-aad661eb4365-kube-api-access-z9fs9\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:08:00.388260 master-1 kubenswrapper[4740]: I1014 13:08:00.387908 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fghw9\" (UniqueName: \"kubernetes.io/projected/f4f3c22a-c0cd-4727-bfb4-9f92302eb13f-kube-api-access-fghw9\") pod \"openshift-apiserver-operator-7d88655794-dbtvc\" (UID: \"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" Oct 14 13:08:00.409166 master-1 kubenswrapper[4740]: I1014 13:08:00.409119 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kr6qv\" (UniqueName: \"kubernetes.io/projected/3d292fbb-b49c-4543-993b-738103c7419b-kube-api-access-kr6qv\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:08:00.424402 master-1 kubenswrapper[4740]: I1014 13:08:00.424363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ngr\" (UniqueName: \"kubernetes.io/projected/3a952fbc-3908-4e41-a914-9f63f47252e4-kube-api-access-h7ngr\") pod \"openshift-controller-manager-operator-5745565d84-5l45t\" (UID: \"3a952fbc-3908-4e41-a914-9f63f47252e4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" Oct 14 13:08:00.427313 master-1 kubenswrapper[4740]: I1014 13:08:00.427197 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" Oct 14 13:08:00.446092 master-1 kubenswrapper[4740]: I1014 13:08:00.446032 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb787\" (UniqueName: \"kubernetes.io/projected/1d68f537-be68-4623-bded-e5d7fb5c3573-kube-api-access-nb787\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:08:00.465381 master-1 kubenswrapper[4740]: I1014 13:08:00.465341 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzss\" (UniqueName: \"kubernetes.io/projected/f22c13e5-9b56-4f0c-a17a-677ba07226ff-kube-api-access-xfzss\") pod \"cluster-olm-operator-77b56b6f4f-prtfl\" (UID: \"f22c13e5-9b56-4f0c-a17a-677ba07226ff\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" Oct 14 13:08:00.482917 master-1 kubenswrapper[4740]: I1014 13:08:00.482867 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" Oct 14 13:08:00.484499 master-1 kubenswrapper[4740]: I1014 13:08:00.484462 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq44g\" (UniqueName: \"kubernetes.io/projected/01742ba1-f43b-4ff2-97d5-1a535e925a0f-kube-api-access-wq44g\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:08:00.501478 master-1 kubenswrapper[4740]: I1014 13:08:00.501426 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" Oct 14 13:08:00.517015 master-1 kubenswrapper[4740]: I1014 13:08:00.516941 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec50d087-259f-45c0-a15a-7fe949ae66dd-kube-api-access\") pod \"openshift-kube-scheduler-operator-766d6b44f6-gtvcp\" (UID: \"ec50d087-259f-45c0-a15a-7fe949ae66dd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" Oct 14 13:08:00.557142 master-1 kubenswrapper[4740]: I1014 13:08:00.556799 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf" Oct 14 13:08:00.557366 master-1 kubenswrapper[4740]: I1014 13:08:00.557270 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" Oct 14 13:08:00.560257 master-1 kubenswrapper[4740]: I1014 13:08:00.560164 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrw7t\" (UniqueName: \"kubernetes.io/projected/97b0a691-fe82-46b1-9f04-671aed7e10be-kube-api-access-qrw7t\") pod \"authentication-operator-66df44bc95-gldlr\" (UID: \"97b0a691-fe82-46b1-9f04-671aed7e10be\") " pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:08:00.561464 master-1 kubenswrapper[4740]: I1014 13:08:00.561416 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqnwp\" (UniqueName: \"kubernetes.io/projected/910af03d-893a-443d-b6ed-fe21c26951f4-kube-api-access-kqnwp\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" Oct 14 13:08:00.563338 master-1 kubenswrapper[4740]: I1014 13:08:00.563313 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw"] Oct 14 13:08:00.564915 master-1 kubenswrapper[4740]: I1014 13:08:00.564761 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cznrk\" (UniqueName: \"kubernetes.io/projected/57526e49-7f51-4a66-8f48-0c485fc1e88f-kube-api-access-cznrk\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:08:00.578325 master-1 kubenswrapper[4740]: I1014 13:08:00.578289 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"] Oct 14 13:08:00.592289 master-1 kubenswrapper[4740]: I1014 13:08:00.592247 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772f8774-25f4-4987-bd40-8f3adda97e8b-kube-api-access\") pod \"kube-controller-manager-operator-5d85974df9-ppzvt\" (UID: \"772f8774-25f4-4987-bd40-8f3adda97e8b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" Oct 14 13:08:00.605315 master-1 kubenswrapper[4740]: I1014 13:08:00.604646 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7ck6\" (UniqueName: \"kubernetes.io/projected/ec085d84-4833-4e0b-9e6a-35b983a7059b-kube-api-access-l7ck6\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:08:00.624018 master-1 kubenswrapper[4740]: I1014 13:08:00.623967 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftg2g\" (UniqueName: \"kubernetes.io/projected/24d7cccd-3100-4c4f-9303-fc57993b465e-kube-api-access-ftg2g\") pod \"kube-storage-version-migrator-operator-dcfdffd74-ckmcc\" (UID: \"24d7cccd-3100-4c4f-9303-fc57993b465e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" Oct 14 13:08:00.642855 master-1 kubenswrapper[4740]: I1014 13:08:00.642815 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kgk\" (UniqueName: \"kubernetes.io/projected/2fa5c762-a739-4cf4-929c-573bc5494b81-kube-api-access-d5kgk\") pod \"service-ca-operator-568c655666-t6c8q\" (UID: \"2fa5c762-a739-4cf4-929c-573bc5494b81\") " pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" Oct 14 13:08:00.667698 master-1 kubenswrapper[4740]: I1014 13:08:00.667656 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnc2f\" (UniqueName: \"kubernetes.io/projected/2a2b886b-005d-4d02-a231-ddacf42775ea-kube-api-access-tnc2f\") pod 
\"etcd-operator-6bddf7d79-dtp9l\" (UID: \"2a2b886b-005d-4d02-a231-ddacf42775ea\") " pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:08:00.669892 master-1 kubenswrapper[4740]: I1014 13:08:00.669846 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-7dcf5bd85b-chrmm"] Oct 14 13:08:00.676382 master-1 kubenswrapper[4740]: W1014 13:08:00.676339 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63a7ff79_3d66_457a_bb4a_dc851ca9d4e8.slice/crio-bfdb815eb674c8452b7894ae4670f6d615ad9719249fd331c70b4fdea171640f WatchSource:0}: Error finding container bfdb815eb674c8452b7894ae4670f6d615ad9719249fd331c70b4fdea171640f: Status 404 returned error can't find the container with id bfdb815eb674c8452b7894ae4670f6d615ad9719249fd331c70b4fdea171640f Oct 14 13:08:00.679284 master-1 kubenswrapper[4740]: I1014 13:08:00.679222 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptnqq\" (UniqueName: \"kubernetes.io/projected/d25ed7db-e690-44d5-a1a4-ed29b8efeed1-kube-api-access-ptnqq\") pod \"iptables-alerter-m6qfh\" (UID: \"d25ed7db-e690-44d5-a1a4-ed29b8efeed1\") " pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:08:00.701373 master-1 kubenswrapper[4740]: I1014 13:08:00.701334 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" Oct 14 13:08:00.701636 master-1 kubenswrapper[4740]: I1014 13:08:00.701598 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl"] Oct 14 13:08:00.704201 master-1 kubenswrapper[4740]: I1014 13:08:00.704174 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pg7b\" (UniqueName: \"kubernetes.io/projected/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-kube-api-access-6pg7b\") pod \"assisted-installer-controller-mzrkb\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:08:00.710658 master-1 kubenswrapper[4740]: I1014 13:08:00.709595 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" Oct 14 13:08:00.715977 master-1 kubenswrapper[4740]: I1014 13:08:00.715939 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" Oct 14 13:08:00.718827 master-1 kubenswrapper[4740]: I1014 13:08:00.718752 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 14 13:08:00.725647 master-1 kubenswrapper[4740]: I1014 13:08:00.725603 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbz8k\" (UniqueName: \"kubernetes.io/projected/016573fd-7804-461e-83d7-1c019298f7c6-kube-api-access-zbz8k\") pod \"cluster-storage-operator-56d4b95494-7ff2l\" (UID: \"016573fd-7804-461e-83d7-1c019298f7c6\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" Oct 14 13:08:00.729791 master-1 kubenswrapper[4740]: I1014 13:08:00.729738 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"] Oct 14 13:08:00.736446 master-1 kubenswrapper[4740]: W1014 13:08:00.736395 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4f3c22a_c0cd_4727_bfb4_9f92302eb13f.slice/crio-feb39c65d06855370bb788237e7c3e752d3d1e6005d90732bb07b839b223d748 WatchSource:0}: Error finding container feb39c65d06855370bb788237e7c3e752d3d1e6005d90732bb07b839b223d748: Status 404 returned error can't find the container with id feb39c65d06855370bb788237e7c3e752d3d1e6005d90732bb07b839b223d748 Oct 14 13:08:00.758412 master-1 kubenswrapper[4740]: I1014 13:08:00.758357 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf"] Oct 14 13:08:00.768245 master-1 kubenswrapper[4740]: W1014 13:08:00.768176 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb9c19df_41e6_4216_829f_dd2975ff5108.slice/crio-6e2008eff672e21027fa818b2a72747ec32688e9f98ee50bbb80dd5f21a53087 WatchSource:0}: Error finding container 6e2008eff672e21027fa818b2a72747ec32688e9f98ee50bbb80dd5f21a53087: Status 404 returned error can't find the container with id 6e2008eff672e21027fa818b2a72747ec32688e9f98ee50bbb80dd5f21a53087 Oct 14 13:08:00.773817 master-1 kubenswrapper[4740]: I1014 13:08:00.772900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t"] Oct 14 13:08:00.792741 master-1 kubenswrapper[4740]: I1014 13:08:00.792681 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" Oct 14 13:08:00.820048 master-1 kubenswrapper[4740]: I1014 13:08:00.819955 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" Oct 14 13:08:00.829992 master-1 kubenswrapper[4740]: I1014 13:08:00.829933 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-m6qfh" Oct 14 13:08:00.844892 master-1 kubenswrapper[4740]: W1014 13:08:00.844839 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd25ed7db_e690_44d5_a1a4_ed29b8efeed1.slice/crio-d59ef9effcf7ffa9a72d839a343ea4c790dcac4e0781e809b962934369910979 WatchSource:0}: Error finding container d59ef9effcf7ffa9a72d839a343ea4c790dcac4e0781e809b962934369910979: Status 404 returned error can't find the container with id d59ef9effcf7ffa9a72d839a343ea4c790dcac4e0781e809b962934369910979 Oct 14 13:08:00.873146 master-1 kubenswrapper[4740]: I1014 13:08:00.873010 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt"] Oct 14 13:08:00.891632 master-1 kubenswrapper[4740]: I1014 13:08:00.890186 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:08:00.897178 master-1 kubenswrapper[4740]: I1014 13:08:00.897139 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q"] Oct 14 13:08:00.903103 master-1 kubenswrapper[4740]: I1014 13:08:00.903068 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" Oct 14 13:08:00.904066 master-1 kubenswrapper[4740]: W1014 13:08:00.904026 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb13eb5_2870_4a31_a2b7_1a4e3b02bb67.slice/crio-0d60fb7e8da5e1cc5fc41915af909947121dca8b6f9d069bebefd95845d95026 WatchSource:0}: Error finding container 0d60fb7e8da5e1cc5fc41915af909947121dca8b6f9d069bebefd95845d95026: Status 404 returned error can't find the container with id 0d60fb7e8da5e1cc5fc41915af909947121dca8b6f9d069bebefd95845d95026 Oct 14 13:08:00.907810 master-1 kubenswrapper[4740]: W1014 13:08:00.905824 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fa5c762_a739_4cf4_929c_573bc5494b81.slice/crio-296cbec41f6c58dbb035760d4ef30c22f26eafabc54d933c13a5849534170cca WatchSource:0}: Error finding container 296cbec41f6c58dbb035760d4ef30c22f26eafabc54d933c13a5849534170cca: Status 404 returned error can't find the container with id 296cbec41f6c58dbb035760d4ef30c22f26eafabc54d933c13a5849534170cca Oct 14 13:08:00.917753 master-1 kubenswrapper[4740]: I1014 13:08:00.917631 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp"] Oct 14 13:08:00.930372 master-1 kubenswrapper[4740]: W1014 13:08:00.930334 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec50d087_259f_45c0_a15a_7fe949ae66dd.slice/crio-73d3436d96361144c5486fb274053aa543c601de3285b9d8c03700b672dd1024 WatchSource:0}: Error finding container 73d3436d96361144c5486fb274053aa543c601de3285b9d8c03700b672dd1024: Status 404 returned error can't find the container with id 
73d3436d96361144c5486fb274053aa543c601de3285b9d8c03700b672dd1024 Oct 14 13:08:00.933733 master-1 kubenswrapper[4740]: E1014 13:08:00.933675 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.13,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-766d6b44f6-gtvcp_openshift-kube-scheduler-operator(ec50d087-259f-45c0-a15a-7fe949ae66dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:08:00.935705 master-1 kubenswrapper[4740]: E1014 13:08:00.935658 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" podUID="ec50d087-259f-45c0-a15a-7fe949ae66dd" Oct 14 13:08:00.943583 master-1 kubenswrapper[4740]: I1014 13:08:00.943548 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:08:00.958067 master-1 kubenswrapper[4740]: I1014 13:08:00.958039 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 14 13:08:00.978758 master-1 kubenswrapper[4740]: I1014 13:08:00.978707 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 14 13:08:00.982962 master-1 kubenswrapper[4740]: I1014 13:08:00.982909 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l"] Oct 14 13:08:00.988331 master-1 kubenswrapper[4740]: I1014 13:08:00.988294 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" Oct 14 13:08:00.996910 master-1 kubenswrapper[4740]: I1014 13:08:00.996873 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-66df44bc95-gldlr"] Oct 14 13:08:01.002027 master-1 kubenswrapper[4740]: W1014 13:08:01.001980 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97b0a691_fe82_46b1_9f04_671aed7e10be.slice/crio-795683c7f369eab31210a0effd2c2021747c0a3e0fd62004633d701e3dc74c6f WatchSource:0}: Error finding container 795683c7f369eab31210a0effd2c2021747c0a3e0fd62004633d701e3dc74c6f: Status 404 returned error can't find the container with id 795683c7f369eab31210a0effd2c2021747c0a3e0fd62004633d701e3dc74c6f Oct 14 13:08:01.004442 master-1 kubenswrapper[4740]: E1014 13:08:01.004410 4740 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 14 13:08:01.004442 master-1 kubenswrapper[4740]: container 
&Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then Oct 14 13:08:01.004442 master-1 kubenswrapper[4740]: echo "Copying system trust bundle" Oct 14 13:08:01.004442 master-1 kubenswrapper[4740]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem Oct 14 13:08:01.004442 master-1 kubenswrapper[4740]: fi Oct 14 13:08:01.004442 master-1 kubenswrapper[4740]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate Oct 14 13:08:01.004442 master-1 kubenswrapper[4740]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.25_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrw7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-66df44bc95-gldlr_openshift-authentication-operator(97b0a691-fe82-46b1-9f04-671aed7e10be): ErrImagePull: pull QPS exceeded Oct 14 13:08:01.004442 master-1 
kubenswrapper[4740]: > logger="UnhandledError" Oct 14 13:08:01.005843 master-1 kubenswrapper[4740]: E1014 13:08:01.005804 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" podUID="97b0a691-fe82-46b1-9f04-671aed7e10be" Oct 14 13:08:01.074661 master-1 kubenswrapper[4740]: I1014 13:08:01.074463 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc"] Oct 14 13:08:01.081095 master-1 kubenswrapper[4740]: E1014 13:08:01.081025 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d6e7013acdcdd6199fa08c8e2b4059f547cc6f4b424399f9767497c7692f37,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.25,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftg2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-dcfdffd74-ckmcc_openshift-kube-storage-version-migrator-operator(24d7cccd-3100-4c4f-9303-fc57993b465e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:08:01.082357 master-1 kubenswrapper[4740]: E1014 13:08:01.082318 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" podUID="24d7cccd-3100-4c4f-9303-fc57993b465e" Oct 14 13:08:01.162257 master-1 kubenswrapper[4740]: I1014 13:08:01.162186 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l"] Oct 14 13:08:01.171945 master-1 kubenswrapper[4740]: W1014 13:08:01.171896 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016573fd_7804_461e_83d7_1c019298f7c6.slice/crio-a6697dc49328a78cd6d32fa765b43cdd3cc7d3aaf6f9a1cc6bbde05d3d6552e0 WatchSource:0}: Error finding container a6697dc49328a78cd6d32fa765b43cdd3cc7d3aaf6f9a1cc6bbde05d3d6552e0: Status 404 returned error can't find the container with id a6697dc49328a78cd6d32fa765b43cdd3cc7d3aaf6f9a1cc6bbde05d3d6552e0 Oct 14 13:08:01.267033 master-1 kubenswrapper[4740]: I1014 13:08:01.266929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" Oct 14 13:08:01.267258 master-1 kubenswrapper[4740]: I1014 13:08:01.267097 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:08:01.267258 master-1 kubenswrapper[4740]: E1014 13:08:01.267119 4740 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:01.267258 master-1 kubenswrapper[4740]: E1014 13:08:01.267200 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls podName:398ba6fd-0f8f-46af-b690-61a6eec9176b nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:03.267168445 +0000 UTC m=+109.077457774 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls") pod "ingress-operator-766ddf4575-xhdjt" (UID: "398ba6fd-0f8f-46af-b690-61a6eec9176b") : secret "metrics-tls" not found Oct 14 13:08:01.267407 master-1 kubenswrapper[4740]: I1014 13:08:01.267291 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:01.267407 master-1 kubenswrapper[4740]: I1014 13:08:01.267369 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" Oct 14 13:08:01.267484 master-1 kubenswrapper[4740]: I1014 13:08:01.267406 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:08:01.267523 master-1 kubenswrapper[4740]: I1014 13:08:01.267481 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267504 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267665 4740 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267696 4740 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267698 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: I1014 13:08:01.267559 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267778 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 
13:08:01.267659 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267864 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.267688307 +0000 UTC m=+109.077977666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "node-tuning-operator-tls" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267944 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert podName:1fa31cdd-e051-4987-a1a2-814fc7445e6b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.267891242 +0000 UTC m=+109.078180621 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-4cf2d" (UID: "1fa31cdd-e051-4987-a1a2-814fc7445e6b") : secret "cloud-credential-operator-serving-cert" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.267974 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls podName:62ef5e24-de36-454a-a34c-e741a86a6f96 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:03.267961173 +0000 UTC m=+109.078250532 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-cxtgh" (UID: "62ef5e24-de36-454a-a34c-e741a86a6f96") : secret "cluster-monitoring-operator-tls" not found Oct 14 13:08:01.268015 master-1 kubenswrapper[4740]: E1014 13:08:01.268003 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert podName:ab511c1d-28e3-448a-86ec-cea21871fd26 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.267988404 +0000 UTC m=+109.078277763 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert") pod "cluster-autoscaler-operator-7ff449c7c5-nmpfk" (UID: "ab511c1d-28e3-448a-86ec-cea21871fd26") : secret "cluster-autoscaler-operator-cert" not found Oct 14 13:08:01.268460 master-1 kubenswrapper[4740]: E1014 13:08:01.268193 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert podName:7be129fe-d04d-4384-a0e9-76b3148a1f3e nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.268142048 +0000 UTC m=+109.078431517 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-j2bjv" (UID: "7be129fe-d04d-4384-a0e9-76b3148a1f3e") : secret "package-server-manager-serving-cert" not found Oct 14 13:08:01.268460 master-1 kubenswrapper[4740]: E1014 13:08:01.268290 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.268266181 +0000 UTC m=+109.078555770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-operator-tls" not found Oct 14 13:08:01.268460 master-1 kubenswrapper[4740]: I1014 13:08:01.268403 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" Oct 14 13:08:01.268576 master-1 kubenswrapper[4740]: I1014 13:08:01.268478 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 13:08:01.268620 master-1 
kubenswrapper[4740]: I1014 13:08:01.268564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" Oct 14 13:08:01.268734 master-1 kubenswrapper[4740]: E1014 13:08:01.268704 4740 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 14 13:08:01.268778 master-1 kubenswrapper[4740]: I1014 13:08:01.268760 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:01.268817 master-1 kubenswrapper[4740]: E1014 13:08:01.268776 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls podName:b51ef0bc-8b0e-4fab-b101-660ed408924c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.268751413 +0000 UTC m=+109.079040772 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-s66vj" (UID: "b51ef0bc-8b0e-4fab-b101-660ed408924c") : secret "machine-api-operator-tls" not found Oct 14 13:08:01.268883 master-1 kubenswrapper[4740]: E1014 13:08:01.268859 4740 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 14 13:08:01.268922 master-1 kubenswrapper[4740]: E1014 13:08:01.268871 4740 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 14 13:08:01.268922 master-1 kubenswrapper[4740]: E1014 13:08:01.268911 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls podName:b1a35e1e-333f-480c-b1d6-059475700627 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.268897497 +0000 UTC m=+109.079186866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-gspqw" (UID: "b1a35e1e-333f-480c-b1d6-059475700627") : secret "image-registry-operator-tls" not found Oct 14 13:08:01.269057 master-1 kubenswrapper[4740]: I1014 13:08:01.268912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" Oct 14 13:08:01.269057 master-1 kubenswrapper[4740]: E1014 13:08:01.268940 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls podName:c4ca808a-394d-4a17-ac12-1df264c7ed92 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.268924237 +0000 UTC m=+109.079213596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls") pod "machine-config-operator-7b75469658-j2dbc" (UID: "c4ca808a-394d-4a17-ac12-1df264c7ed92") : secret "mco-proxy-tls" not found Oct 14 13:08:01.269057 master-1 kubenswrapper[4740]: E1014 13:08:01.269002 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:01.269057 master-1 kubenswrapper[4740]: E1014 13:08:01.269025 4740 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:01.269057 master-1 kubenswrapper[4740]: E1014 13:08:01.269045 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.26903052 +0000 UTC m=+109.079319889 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:01.269057 master-1 kubenswrapper[4740]: I1014 13:08:01.269026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:08:01.269307 master-1 kubenswrapper[4740]: E1014 13:08:01.269077 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls podName:a4ab71e1-9b1f-42ee-8abb-8f998e3cae74 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.269064321 +0000 UTC m=+109.079353680 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-n87md" (UID: "a4ab71e1-9b1f-42ee-8abb-8f998e3cae74") : secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:01.269307 master-1 kubenswrapper[4740]: E1014 13:08:01.269187 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 14 13:08:01.269381 master-1 kubenswrapper[4740]: E1014 13:08:01.269311 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.269288646 +0000 UTC m=+109.079578295 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "performance-addon-operator-webhook-cert" not found Oct 14 13:08:01.329960 master-1 kubenswrapper[4740]: I1014 13:08:01.329693 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" event={"ID":"2fa5c762-a739-4cf4-929c-573bc5494b81","Type":"ContainerStarted","Data":"296cbec41f6c58dbb035760d4ef30c22f26eafabc54d933c13a5849534170cca"} Oct 14 13:08:01.331214 master-1 kubenswrapper[4740]: I1014 13:08:01.330935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" event={"ID":"24d7cccd-3100-4c4f-9303-fc57993b465e","Type":"ContainerStarted","Data":"bccd83a7f60de59fb99df5cf93ec1b450db7cee654695d579f8a27ec0e818cc3"} Oct 14 13:08:01.335787 master-1 kubenswrapper[4740]: I1014 13:08:01.334575 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" event={"ID":"772f8774-25f4-4987-bd40-8f3adda97e8b","Type":"ContainerStarted","Data":"bead2ea7d25d883f8e6578eacbb0e81d56099ab202128e210a8930ce294e0e8a"} Oct 14 13:08:01.335787 master-1 kubenswrapper[4740]: E1014 13:08:01.335032 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34\\\"\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" podUID="24d7cccd-3100-4c4f-9303-fc57993b465e" Oct 14 
13:08:01.336441 master-1 kubenswrapper[4740]: I1014 13:08:01.336406 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" event={"ID":"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8","Type":"ContainerStarted","Data":"bfdb815eb674c8452b7894ae4670f6d615ad9719249fd331c70b4fdea171640f"} Oct 14 13:08:01.340196 master-1 kubenswrapper[4740]: I1014 13:08:01.340128 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-m6qfh" event={"ID":"d25ed7db-e690-44d5-a1a4-ed29b8efeed1","Type":"ContainerStarted","Data":"d59ef9effcf7ffa9a72d839a343ea4c790dcac4e0781e809b962934369910979"} Oct 14 13:08:01.341680 master-1 kubenswrapper[4740]: I1014 13:08:01.341590 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" event={"ID":"ec50d087-259f-45c0-a15a-7fe949ae66dd","Type":"ContainerStarted","Data":"73d3436d96361144c5486fb274053aa543c601de3285b9d8c03700b672dd1024"} Oct 14 13:08:01.343219 master-1 kubenswrapper[4740]: E1014 13:08:01.343107 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642\\\"\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" podUID="ec50d087-259f-45c0-a15a-7fe949ae66dd" Oct 14 13:08:01.343512 master-1 kubenswrapper[4740]: I1014 13:08:01.343445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-mzrkb" event={"ID":"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67","Type":"ContainerStarted","Data":"0d60fb7e8da5e1cc5fc41915af909947121dca8b6f9d069bebefd95845d95026"} Oct 14 13:08:01.345266 master-1 kubenswrapper[4740]: I1014 13:08:01.345200 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" event={"ID":"f22c13e5-9b56-4f0c-a17a-677ba07226ff","Type":"ContainerStarted","Data":"e10fd799bac6a2f6f30df96a509c046c291084cb699c094d4bf353671d799998"} Oct 14 13:08:01.347174 master-1 kubenswrapper[4740]: I1014 13:08:01.347093 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" event={"ID":"97b0a691-fe82-46b1-9f04-671aed7e10be","Type":"ContainerStarted","Data":"795683c7f369eab31210a0effd2c2021747c0a3e0fd62004633d701e3dc74c6f"} Oct 14 13:08:01.348892 master-1 kubenswrapper[4740]: I1014 13:08:01.348838 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" event={"ID":"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c","Type":"ContainerStarted","Data":"cf8c315ee7235a066bb6cd2e97f262aaddf87551824d0dfc5cde8807e89bb53b"} Oct 14 13:08:01.349133 master-1 kubenswrapper[4740]: E1014 13:08:01.349098 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d\\\"\"" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" podUID="97b0a691-fe82-46b1-9f04-671aed7e10be" Oct 14 13:08:01.350083 master-1 kubenswrapper[4740]: I1014 13:08:01.350037 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" event={"ID":"016573fd-7804-461e-83d7-1c019298f7c6","Type":"ContainerStarted","Data":"a6697dc49328a78cd6d32fa765b43cdd3cc7d3aaf6f9a1cc6bbde05d3d6552e0"} Oct 14 13:08:01.351450 master-1 kubenswrapper[4740]: I1014 13:08:01.351402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" event={"ID":"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f","Type":"ContainerStarted","Data":"feb39c65d06855370bb788237e7c3e752d3d1e6005d90732bb07b839b223d748"} Oct 14 13:08:01.352681 master-1 kubenswrapper[4740]: I1014 13:08:01.352569 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" event={"ID":"f8b5ead9-7212-4a2f-8105-92d1c5384308","Type":"ContainerStarted","Data":"622e938552b3086b104c4c34521740891630cbb5d565244296fa5eb96809bb35"} Oct 14 13:08:01.353979 master-1 kubenswrapper[4740]: I1014 13:08:01.353933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" event={"ID":"3a952fbc-3908-4e41-a914-9f63f47252e4","Type":"ContainerStarted","Data":"7569bb6b3f9ebc8f74f518018932e081ae8f3df361ef868f42fca1719f08ff3f"} Oct 14 13:08:01.355115 master-1 kubenswrapper[4740]: I1014 13:08:01.355069 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" event={"ID":"2a2b886b-005d-4d02-a231-ddacf42775ea","Type":"ContainerStarted","Data":"21ebfcd09e9914c6541d42952ae8284517bc01cb2e08a7b414b254cfcd583a40"} Oct 14 13:08:01.356181 master-1 kubenswrapper[4740]: I1014 13:08:01.356136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf" event={"ID":"db9c19df-41e6-4216-829f-dd2975ff5108","Type":"ContainerStarted","Data":"6e2008eff672e21027fa818b2a72747ec32688e9f98ee50bbb80dd5f21a53087"} Oct 14 13:08:01.369976 master-1 kubenswrapper[4740]: I1014 13:08:01.369909 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: 
\"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:08:01.370043 master-1 kubenswrapper[4740]: I1014 13:08:01.369985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" Oct 14 13:08:01.370217 master-1 kubenswrapper[4740]: I1014 13:08:01.370058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:08:01.370217 master-1 kubenswrapper[4740]: I1014 13:08:01.370128 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:08:01.370217 master-1 kubenswrapper[4740]: E1014 13:08:01.370151 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 14 13:08:01.370217 master-1 kubenswrapper[4740]: I1014 13:08:01.370187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " 
pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:08:01.370399 master-1 kubenswrapper[4740]: E1014 13:08:01.370260 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert podName:57526e49-7f51-4a66-8f48-0c485fc1e88f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.370210248 +0000 UTC m=+109.180499607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert") pod "olm-operator-867f8475d9-fl56c" (UID: "57526e49-7f51-4a66-8f48-0c485fc1e88f") : secret "olm-operator-serving-cert" not found Oct 14 13:08:01.370399 master-1 kubenswrapper[4740]: I1014 13:08:01.370347 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:08:01.370399 master-1 kubenswrapper[4740]: E1014 13:08:01.370360 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:01.370518 master-1 kubenswrapper[4740]: I1014 13:08:01.370454 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:08:01.370518 master-1 kubenswrapper[4740]: E1014 13:08:01.370475 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret 
"multus-admission-controller-secret" not found Oct 14 13:08:01.370596 master-1 kubenswrapper[4740]: E1014 13:08:01.370524 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs podName:01742ba1-f43b-4ff2-97d5-1a535e925a0f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.370500004 +0000 UTC m=+109.180789363 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs") pod "multus-admission-controller-77b66fddc8-9npgz" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f") : secret "multus-admission-controller-secret" not found Oct 14 13:08:01.370596 master-1 kubenswrapper[4740]: E1014 13:08:01.370584 4740 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 14 13:08:01.370681 master-1 kubenswrapper[4740]: E1014 13:08:01.370608 4740 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:01.370681 master-1 kubenswrapper[4740]: E1014 13:08:01.370631 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics podName:2a106ff8-388a-4d30-8370-aad661eb4365 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.370612637 +0000 UTC m=+109.180901996 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-djh96" (UID: "2a106ff8-388a-4d30-8370-aad661eb4365") : secret "marketplace-operator-metrics" not found Oct 14 13:08:01.370760 master-1 kubenswrapper[4740]: E1014 13:08:01.370718 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls podName:910af03d-893a-443d-b6ed-fe21c26951f4 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.37067941 +0000 UTC m=+109.180968779 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls") pod "dns-operator-7769d9677-nh2qc" (UID: "910af03d-893a-443d-b6ed-fe21c26951f4") : secret "metrics-tls" not found Oct 14 13:08:01.370760 master-1 kubenswrapper[4740]: E1014 13:08:01.370678 4740 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 14 13:08:01.370839 master-1 kubenswrapper[4740]: E1014 13:08:01.370789 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 14 13:08:01.370960 master-1 kubenswrapper[4740]: E1014 13:08:01.370752 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs podName:ec085d84-4833-4e0b-9e6a-35b983a7059b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.370735751 +0000 UTC m=+109.181025120 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs") pod "multus-admission-controller-77b66fddc8-mgc7h" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b") : secret "multus-admission-controller-secret" not found Oct 14 13:08:01.371081 master-1 kubenswrapper[4740]: E1014 13:08:01.371027 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls podName:1d68f537-be68-4623-bded-e5d7fb5c3573 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.370954816 +0000 UTC m=+109.181244175 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls") pod "machine-approver-7876f99457-kpq7g" (UID: "1d68f537-be68-4623-bded-e5d7fb5c3573") : secret "machine-approver-tls" not found Oct 14 13:08:01.371223 master-1 kubenswrapper[4740]: E1014 13:08:01.371103 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert podName:3d292fbb-b49c-4543-993b-738103c7419b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:03.371089119 +0000 UTC m=+109.181378488 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert") pod "catalog-operator-f966fb6f8-dwwm2" (UID: "3d292fbb-b49c-4543-993b-738103c7419b") : secret "catalog-operator-serving-cert" not found Oct 14 13:08:02.361275 master-1 kubenswrapper[4740]: E1014 13:08:02.360890 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642\\\"\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" podUID="ec50d087-259f-45c0-a15a-7fe949ae66dd" Oct 14 13:08:02.361275 master-1 kubenswrapper[4740]: E1014 13:08:02.361008 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34\\\"\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" podUID="24d7cccd-3100-4c4f-9303-fc57993b465e" Oct 14 13:08:02.361275 master-1 kubenswrapper[4740]: E1014 13:08:02.361076 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d\\\"\"" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" podUID="97b0a691-fe82-46b1-9f04-671aed7e10be" Oct 14 13:08:03.293406 master-1 kubenswrapper[4740]: I1014 13:08:03.293313 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293436 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293475 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293497 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293517 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: 
\"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293552 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293574 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293595 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293614 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" Oct 14 13:08:03.293640 master-1 kubenswrapper[4740]: I1014 13:08:03.293634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:08:03.293914 master-1 kubenswrapper[4740]: I1014 13:08:03.293671 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" Oct 14 13:08:03.293914 master-1 kubenswrapper[4740]: E1014 13:08:03.293667 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:03.293914 master-1 kubenswrapper[4740]: E1014 13:08:03.293718 4740 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:03.293914 master-1 kubenswrapper[4740]: E1014 13:08:03.293744 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 14 13:08:03.293914 master-1 kubenswrapper[4740]: I1014 13:08:03.293703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod 
\"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 13:08:03.293914 master-1 kubenswrapper[4740]: E1014 13:08:03.293792 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 14 13:08:03.294064 master-1 kubenswrapper[4740]: E1014 13:08:03.293788 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 14 13:08:03.294064 master-1 kubenswrapper[4740]: E1014 13:08:03.293728 4740 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:03.294064 master-1 kubenswrapper[4740]: E1014 13:08:03.293777 4740 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 14 13:08:03.294132 master-1 kubenswrapper[4740]: E1014 13:08:03.294032 4740 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 14 13:08:03.294160 master-1 kubenswrapper[4740]: E1014 13:08:03.293823 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 14 13:08:03.294185 master-1 kubenswrapper[4740]: E1014 13:08:03.293718 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 14 13:08:03.294215 master-1 kubenswrapper[4740]: E1014 13:08:03.293843 4740 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 14 13:08:03.294274 master-1 kubenswrapper[4740]: E1014 13:08:03.293876 4740 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.29379284 +0000 UTC m=+113.104082169 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:03.294274 master-1 kubenswrapper[4740]: E1014 13:08:03.293866 4740 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 14 13:08:03.294341 master-1 kubenswrapper[4740]: E1014 13:08:03.294298 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls podName:c4ca808a-394d-4a17-ac12-1df264c7ed92 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294266042 +0000 UTC m=+113.104555391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls") pod "machine-config-operator-7b75469658-j2dbc" (UID: "c4ca808a-394d-4a17-ac12-1df264c7ed92") : secret "mco-proxy-tls" not found Oct 14 13:08:03.294341 master-1 kubenswrapper[4740]: E1014 13:08:03.294328 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls podName:a4ab71e1-9b1f-42ee-8abb-8f998e3cae74 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294315753 +0000 UTC m=+113.104605092 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-n87md" (UID: "a4ab71e1-9b1f-42ee-8abb-8f998e3cae74") : secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:03.294401 master-1 kubenswrapper[4740]: E1014 13:08:03.294345 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294337063 +0000 UTC m=+113.104626402 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "performance-addon-operator-webhook-cert" not found Oct 14 13:08:03.294401 master-1 kubenswrapper[4740]: I1014 13:08:03.294389 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" Oct 14 13:08:03.294503 master-1 kubenswrapper[4740]: E1014 13:08:03.294463 4740 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 14 13:08:03.294503 master-1 kubenswrapper[4740]: E1014 13:08:03.294488 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert 
podName:7be129fe-d04d-4384-a0e9-76b3148a1f3e nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294478048 +0000 UTC m=+113.104767397 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-j2bjv" (UID: "7be129fe-d04d-4384-a0e9-76b3148a1f3e") : secret "package-server-manager-serving-cert" not found Oct 14 13:08:03.294566 master-1 kubenswrapper[4740]: E1014 13:08:03.294508 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294501028 +0000 UTC m=+113.104790617 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-operator-tls" not found Oct 14 13:08:03.294566 master-1 kubenswrapper[4740]: E1014 13:08:03.294534 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls podName:b1a35e1e-333f-480c-b1d6-059475700627 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294516009 +0000 UTC m=+113.104805338 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-gspqw" (UID: "b1a35e1e-333f-480c-b1d6-059475700627") : secret "image-registry-operator-tls" not found Oct 14 13:08:03.294566 master-1 kubenswrapper[4740]: E1014 13:08:03.294553 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls podName:398ba6fd-0f8f-46af-b690-61a6eec9176b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294543899 +0000 UTC m=+113.104833228 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls") pod "ingress-operator-766ddf4575-xhdjt" (UID: "398ba6fd-0f8f-46af-b690-61a6eec9176b") : secret "metrics-tls" not found Oct 14 13:08:03.294566 master-1 kubenswrapper[4740]: E1014 13:08:03.294566 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls podName:62ef5e24-de36-454a-a34c-e741a86a6f96 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.29455923 +0000 UTC m=+113.104848559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-cxtgh" (UID: "62ef5e24-de36-454a-a34c-e741a86a6f96") : secret "cluster-monitoring-operator-tls" not found Oct 14 13:08:03.294681 master-1 kubenswrapper[4740]: E1014 13:08:03.294581 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls podName:b51ef0bc-8b0e-4fab-b101-660ed408924c nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:07.29457428 +0000 UTC m=+113.104863609 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-s66vj" (UID: "b51ef0bc-8b0e-4fab-b101-660ed408924c") : secret "machine-api-operator-tls" not found Oct 14 13:08:03.294681 master-1 kubenswrapper[4740]: E1014 13:08:03.294599 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert podName:ab511c1d-28e3-448a-86ec-cea21871fd26 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.29459313 +0000 UTC m=+113.104882459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert") pod "cluster-autoscaler-operator-7ff449c7c5-nmpfk" (UID: "ab511c1d-28e3-448a-86ec-cea21871fd26") : secret "cluster-autoscaler-operator-cert" not found Oct 14 13:08:03.294681 master-1 kubenswrapper[4740]: E1014 13:08:03.294611 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294605281 +0000 UTC m=+113.104894610 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "node-tuning-operator-tls" not found Oct 14 13:08:03.294681 master-1 kubenswrapper[4740]: E1014 13:08:03.294626 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert podName:1fa31cdd-e051-4987-a1a2-814fc7445e6b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.294617911 +0000 UTC m=+113.104907230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-4cf2d" (UID: "1fa31cdd-e051-4987-a1a2-814fc7445e6b") : secret "cloud-credential-operator-serving-cert" not found Oct 14 13:08:03.395562 master-1 kubenswrapper[4740]: I1014 13:08:03.395491 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:08:03.395562 master-1 kubenswrapper[4740]: I1014 13:08:03.395558 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: I1014 13:08:03.395617 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: I1014 13:08:03.395670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395672 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: I1014 13:08:03.395711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395717 4740 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395747 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert podName:57526e49-7f51-4a66-8f48-0c485fc1e88f nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:07.395727107 +0000 UTC m=+113.206016426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert") pod "olm-operator-867f8475d9-fl56c" (UID: "57526e49-7f51-4a66-8f48-0c485fc1e88f") : secret "olm-operator-serving-cert" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395800 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls podName:910af03d-893a-443d-b6ed-fe21c26951f4 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.395775538 +0000 UTC m=+113.206064867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls") pod "dns-operator-7769d9677-nh2qc" (UID: "910af03d-893a-443d-b6ed-fe21c26951f4") : secret "metrics-tls" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: I1014 13:08:03.395851 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395870 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395922 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs podName:ec085d84-4833-4e0b-9e6a-35b983a7059b nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:07.395904521 +0000 UTC m=+113.206193870 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs") pod "multus-admission-controller-77b66fddc8-mgc7h" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b") : secret "multus-admission-controller-secret" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395939 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: I1014 13:08:03.395955 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395961 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert podName:3d292fbb-b49c-4543-993b-738103c7419b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.395955183 +0000 UTC m=+113.206244512 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert") pod "catalog-operator-f966fb6f8-dwwm2" (UID: "3d292fbb-b49c-4543-993b-738103c7419b") : secret "catalog-operator-serving-cert" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.395996 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.396014 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs podName:01742ba1-f43b-4ff2-97d5-1a535e925a0f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.396009474 +0000 UTC m=+113.206298803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs") pod "multus-admission-controller-77b66fddc8-9npgz" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f") : secret "multus-admission-controller-secret" not found Oct 14 13:08:03.396059 master-1 kubenswrapper[4740]: E1014 13:08:03.396068 4740 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 14 13:08:03.396478 master-1 kubenswrapper[4740]: E1014 13:08:03.396095 4740 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 14 13:08:03.396478 master-1 kubenswrapper[4740]: E1014 13:08:03.396098 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics podName:2a106ff8-388a-4d30-8370-aad661eb4365 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:07.396088936 +0000 UTC m=+113.206378275 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-djh96" (UID: "2a106ff8-388a-4d30-8370-aad661eb4365") : secret "marketplace-operator-metrics" not found Oct 14 13:08:03.396478 master-1 kubenswrapper[4740]: E1014 13:08:03.396129 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls podName:1d68f537-be68-4623-bded-e5d7fb5c3573 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:07.396123987 +0000 UTC m=+113.206413306 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls") pod "machine-approver-7876f99457-kpq7g" (UID: "1d68f537-be68-4623-bded-e5d7fb5c3573") : secret "machine-approver-tls" not found Oct 14 13:08:07.345460 master-1 kubenswrapper[4740]: I1014 13:08:07.345381 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" Oct 14 13:08:07.345460 master-1 kubenswrapper[4740]: I1014 13:08:07.345461 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 
13:08:07.346435 master-1 kubenswrapper[4740]: I1014 13:08:07.345542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.345647 4740 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: I1014 13:08:07.345688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.345698 4740 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.345857 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls podName:b51ef0bc-8b0e-4fab-b101-660ed408924c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.345826853 +0000 UTC m=+121.156116212 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-s66vj" (UID: "b51ef0bc-8b0e-4fab-b101-660ed408924c") : secret "machine-api-operator-tls" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: I1014 13:08:07.345814 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.345936 4740 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.345940 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.345797 4740 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.346023 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: I1014 13:08:07.345956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.346015 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls podName:c4ca808a-394d-4a17-ac12-1df264c7ed92 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.345951096 +0000 UTC m=+121.156240485 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls") pod "machine-config-operator-7b75469658-j2dbc" (UID: "c4ca808a-394d-4a17-ac12-1df264c7ed92") : secret "mco-proxy-tls" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.346110 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.346073849 +0000 UTC m=+121.156363278 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.346148 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls podName:a4ab71e1-9b1f-42ee-8abb-8f998e3cae74 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.34612777 +0000 UTC m=+121.156417259 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-n87md" (UID: "a4ab71e1-9b1f-42ee-8abb-8f998e3cae74") : secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:07.346435 master-1 kubenswrapper[4740]: E1014 13:08:07.346179 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls podName:b1a35e1e-333f-480c-b1d6-059475700627 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.346162481 +0000 UTC m=+121.156451980 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-gspqw" (UID: "b1a35e1e-333f-480c-b1d6-059475700627") : secret "image-registry-operator-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346259 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.346210422 +0000 UTC m=+121.156499801 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "performance-addon-operator-webhook-cert" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: I1014 13:08:07.346362 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: I1014 13:08:07.346439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346470 4740 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: I1014 13:08:07.346500 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346529 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls podName:398ba6fd-0f8f-46af-b690-61a6eec9176b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.3465087 +0000 UTC m=+121.156798059 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls") pod "ingress-operator-766ddf4575-xhdjt" (UID: "398ba6fd-0f8f-46af-b690-61a6eec9176b") : secret "metrics-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: I1014 13:08:07.346562 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346606 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: I1014 13:08:07.346617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346663 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:15.346647524 +0000 UTC m=+121.156936883 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "node-tuning-operator-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346608 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: I1014 13:08:07.346699 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346682 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346747 4740 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346763 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.346740576 +0000 UTC m=+121.157029935 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-operator-tls" not found Oct 14 13:08:07.347412 master-1 kubenswrapper[4740]: E1014 13:08:07.346816 4740 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Oct 14 13:08:07.348479 master-1 kubenswrapper[4740]: I1014 13:08:07.346851 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:08:07.348479 master-1 kubenswrapper[4740]: E1014 13:08:07.346865 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert podName:1fa31cdd-e051-4987-a1a2-814fc7445e6b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.346850609 +0000 UTC m=+121.157139968 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-4cf2d" (UID: "1fa31cdd-e051-4987-a1a2-814fc7445e6b") : secret "cloud-credential-operator-serving-cert" not found Oct 14 13:08:07.348479 master-1 kubenswrapper[4740]: E1014 13:08:07.346938 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert podName:ab511c1d-28e3-448a-86ec-cea21871fd26 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.34691595 +0000 UTC m=+121.157205359 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert") pod "cluster-autoscaler-operator-7ff449c7c5-nmpfk" (UID: "ab511c1d-28e3-448a-86ec-cea21871fd26") : secret "cluster-autoscaler-operator-cert" not found Oct 14 13:08:07.348479 master-1 kubenswrapper[4740]: E1014 13:08:07.346968 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls podName:62ef5e24-de36-454a-a34c-e741a86a6f96 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.346952401 +0000 UTC m=+121.157241900 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-cxtgh" (UID: "62ef5e24-de36-454a-a34c-e741a86a6f96") : secret "cluster-monitoring-operator-tls" not found Oct 14 13:08:07.348479 master-1 kubenswrapper[4740]: E1014 13:08:07.347020 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 14 13:08:07.348479 master-1 kubenswrapper[4740]: E1014 13:08:07.347089 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert podName:7be129fe-d04d-4384-a0e9-76b3148a1f3e nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.347067394 +0000 UTC m=+121.157356823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-j2bjv" (UID: "7be129fe-d04d-4384-a0e9-76b3148a1f3e") : secret "package-server-manager-serving-cert" not found Oct 14 13:08:07.448478 master-1 kubenswrapper[4740]: I1014 13:08:07.448417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:08:07.448634 master-1 kubenswrapper[4740]: I1014 13:08:07.448514 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:08:07.448634 master-1 kubenswrapper[4740]: I1014 13:08:07.448581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:08:07.448770 master-1 kubenswrapper[4740]: I1014 13:08:07.448732 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:08:07.448770 master-1 kubenswrapper[4740]: E1014 13:08:07.448754 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 14 13:08:07.448868 master-1 kubenswrapper[4740]: I1014 13:08:07.448792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" Oct 14 13:08:07.448868 master-1 kubenswrapper[4740]: E1014 13:08:07.448752 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:07.448868 master-1 
kubenswrapper[4740]: E1014 13:08:07.448841 4740 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 14 13:08:07.448974 master-1 kubenswrapper[4740]: E1014 13:08:07.448836 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert podName:3d292fbb-b49c-4543-993b-738103c7419b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.448811826 +0000 UTC m=+121.259101205 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert") pod "catalog-operator-f966fb6f8-dwwm2" (UID: "3d292fbb-b49c-4543-993b-738103c7419b") : secret "catalog-operator-serving-cert" not found Oct 14 13:08:07.448974 master-1 kubenswrapper[4740]: I1014 13:08:07.448931 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:08:07.448974 master-1 kubenswrapper[4740]: E1014 13:08:07.448945 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 14 13:08:07.448974 master-1 kubenswrapper[4740]: E1014 13:08:07.448959 4740 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:07.449094 master-1 kubenswrapper[4740]: E1014 13:08:07.448978 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics podName:2a106ff8-388a-4d30-8370-aad661eb4365 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:15.448921979 +0000 UTC m=+121.259211358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-djh96" (UID: "2a106ff8-388a-4d30-8370-aad661eb4365") : secret "marketplace-operator-metrics" not found Oct 14 13:08:07.449094 master-1 kubenswrapper[4740]: E1014 13:08:07.448995 4740 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 14 13:08:07.449094 master-1 kubenswrapper[4740]: E1014 13:08:07.449036 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls podName:910af03d-893a-443d-b6ed-fe21c26951f4 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.449009951 +0000 UTC m=+121.259299350 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls") pod "dns-operator-7769d9677-nh2qc" (UID: "910af03d-893a-443d-b6ed-fe21c26951f4") : secret "metrics-tls" not found Oct 14 13:08:07.449094 master-1 kubenswrapper[4740]: E1014 13:08:07.449072 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs podName:ec085d84-4833-4e0b-9e6a-35b983a7059b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.449055432 +0000 UTC m=+121.259344871 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs") pod "multus-admission-controller-77b66fddc8-mgc7h" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b") : secret "multus-admission-controller-secret" not found Oct 14 13:08:07.449259 master-1 kubenswrapper[4740]: I1014 13:08:07.449162 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:08:07.449259 master-1 kubenswrapper[4740]: E1014 13:08:07.449199 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert podName:57526e49-7f51-4a66-8f48-0c485fc1e88f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.449183235 +0000 UTC m=+121.259472664 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert") pod "olm-operator-867f8475d9-fl56c" (UID: "57526e49-7f51-4a66-8f48-0c485fc1e88f") : secret "olm-operator-serving-cert" not found Oct 14 13:08:07.449259 master-1 kubenswrapper[4740]: E1014 13:08:07.449219 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls podName:1d68f537-be68-4623-bded-e5d7fb5c3573 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.449211715 +0000 UTC m=+121.259501204 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls") pod "machine-approver-7876f99457-kpq7g" (UID: "1d68f537-be68-4623-bded-e5d7fb5c3573") : secret "machine-approver-tls" not found Oct 14 13:08:07.449380 master-1 kubenswrapper[4740]: E1014 13:08:07.449326 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:07.449419 master-1 kubenswrapper[4740]: E1014 13:08:07.449389 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs podName:01742ba1-f43b-4ff2-97d5-1a535e925a0f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.449374919 +0000 UTC m=+121.259664288 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs") pod "multus-admission-controller-77b66fddc8-9npgz" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f") : secret "multus-admission-controller-secret" not found Oct 14 13:08:08.248792 master-1 kubenswrapper[4740]: I1014 13:08:08.248710 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:08:08.249735 master-1 kubenswrapper[4740]: I1014 13:08:08.249696 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:08:08.266691 master-1 kubenswrapper[4740]: I1014 13:08:08.266636 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvfnh" Oct 14 13:08:10.384623 master-1 kubenswrapper[4740]: I1014 13:08:10.383947 4740 generic.go:334] "Generic (PLEG): container finished" podID="f22c13e5-9b56-4f0c-a17a-677ba07226ff" 
containerID="af1676f923742d44a12d7249df33b170922a72a777df1ffed222882ff947d984" exitCode=0 Oct 14 13:08:10.384623 master-1 kubenswrapper[4740]: I1014 13:08:10.384062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" event={"ID":"f22c13e5-9b56-4f0c-a17a-677ba07226ff","Type":"ContainerDied","Data":"af1676f923742d44a12d7249df33b170922a72a777df1ffed222882ff947d984"} Oct 14 13:08:10.391706 master-1 kubenswrapper[4740]: I1014 13:08:10.391652 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" event={"ID":"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8","Type":"ContainerStarted","Data":"9b74c929145b31438f3513ba5ba67f7ee6219461626ba8690455042fa87245dd"} Oct 14 13:08:10.397945 master-1 kubenswrapper[4740]: I1014 13:08:10.397897 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" event={"ID":"3a952fbc-3908-4e41-a914-9f63f47252e4","Type":"ContainerStarted","Data":"6de25fc526ffef8f6555e86be736168c5607f69c1a5e7ea4f358240ec12270b9"} Oct 14 13:08:10.398263 master-1 kubenswrapper[4740]: I1014 13:08:10.398236 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/master-1-debug-qq2pg"] Oct 14 13:08:10.398697 master-1 kubenswrapper[4740]: I1014 13:08:10.398672 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.401114 master-1 kubenswrapper[4740]: I1014 13:08:10.401078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" event={"ID":"2a2b886b-005d-4d02-a231-ddacf42775ea","Type":"ContainerStarted","Data":"3aec0d5b414dd5378b2837a6c0774b59f0068ddf7ac248756ee9c342ee243ba0"} Oct 14 13:08:10.402643 master-1 kubenswrapper[4740]: I1014 13:08:10.402605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" event={"ID":"2fa5c762-a739-4cf4-929c-573bc5494b81","Type":"ContainerStarted","Data":"008d1108c66a56e8ed16a8017d28e4157ac29ff463d22610838bd2fe665ea8cb"} Oct 14 13:08:10.406244 master-1 kubenswrapper[4740]: I1014 13:08:10.403987 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-mzrkb" event={"ID":"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67","Type":"ContainerStarted","Data":"bc71a15d544001ec0327f2a718240b52ed1d0e11b63a81eddc56b5f9b5a7dd37"} Oct 14 13:08:10.406244 master-1 kubenswrapper[4740]: I1014 13:08:10.405661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf" event={"ID":"db9c19df-41e6-4216-829f-dd2975ff5108","Type":"ContainerStarted","Data":"d9efe379cee8856d872e5c15d2b2f140b21db10825aba85c3c2bc0ede24360d0"} Oct 14 13:08:10.410246 master-1 kubenswrapper[4740]: I1014 13:08:10.407489 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8b5ead9-7212-4a2f-8105-92d1c5384308" containerID="49a4d4027c5738d82111f9adb45502ef346b98beef84cb0c19bcd3691757127a" exitCode=0 Oct 14 13:08:10.410246 master-1 kubenswrapper[4740]: I1014 13:08:10.407542 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" 
event={"ID":"f8b5ead9-7212-4a2f-8105-92d1c5384308","Type":"ContainerDied","Data":"49a4d4027c5738d82111f9adb45502ef346b98beef84cb0c19bcd3691757127a"} Oct 14 13:08:10.414842 master-1 kubenswrapper[4740]: I1014 13:08:10.414782 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" event={"ID":"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f","Type":"ContainerStarted","Data":"f3c650e199f45169804566211177e6d38ecf868a5d13c0b7308282dd019819c8"} Oct 14 13:08:10.419396 master-1 kubenswrapper[4740]: I1014 13:08:10.419348 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" event={"ID":"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c","Type":"ContainerStarted","Data":"bf226709720cf81f2831e8db38bbdb169963c5afa56830a861340544329055d9"} Oct 14 13:08:10.423243 master-1 kubenswrapper[4740]: I1014 13:08:10.420876 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" event={"ID":"016573fd-7804-461e-83d7-1c019298f7c6","Type":"ContainerStarted","Data":"3af935dd187506e59446be2281bb2432ac402c0a0b1380df146e365a3addeab2"} Oct 14 13:08:10.423243 master-1 kubenswrapper[4740]: I1014 13:08:10.422743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" event={"ID":"772f8774-25f4-4987-bd40-8f3adda97e8b","Type":"ContainerStarted","Data":"f5832d56e3fcd22df22a6eedf838f45d8d3192cad36fc782deb89ade5a630fbb"} Oct 14 13:08:10.427241 master-1 kubenswrapper[4740]: I1014 13:08:10.424081 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" podStartSLOduration=68.610013108 podStartE2EDuration="1m17.424070003s" podCreationTimestamp="2025-10-14 13:06:53 +0000 UTC" 
firstStartedPulling="2025-10-14 13:08:00.807390238 +0000 UTC m=+106.617679567" lastFinishedPulling="2025-10-14 13:08:09.621447123 +0000 UTC m=+115.431736462" observedRunningTime="2025-10-14 13:08:10.421960361 +0000 UTC m=+116.232249690" watchObservedRunningTime="2025-10-14 13:08:10.424070003 +0000 UTC m=+116.234359332" Oct 14 13:08:10.442336 master-1 kubenswrapper[4740]: I1014 13:08:10.441814 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" podStartSLOduration=68.804775149 podStartE2EDuration="1m17.441791144s" podCreationTimestamp="2025-10-14 13:06:53 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.990184978 +0000 UTC m=+106.800474297" lastFinishedPulling="2025-10-14 13:08:09.627200943 +0000 UTC m=+115.437490292" observedRunningTime="2025-10-14 13:08:10.440065182 +0000 UTC m=+116.250354511" watchObservedRunningTime="2025-10-14 13:08:10.441791144 +0000 UTC m=+116.252080463" Oct 14 13:08:10.469930 master-1 kubenswrapper[4740]: I1014 13:08:10.469030 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" podStartSLOduration=70.748876866 podStartE2EDuration="1m19.469008789s" podCreationTimestamp="2025-10-14 13:06:51 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.910775711 +0000 UTC m=+106.721065040" lastFinishedPulling="2025-10-14 13:08:09.630907614 +0000 UTC m=+115.441196963" observedRunningTime="2025-10-14 13:08:10.467871661 +0000 UTC m=+116.278161010" watchObservedRunningTime="2025-10-14 13:08:10.469008789 +0000 UTC m=+116.279298118" Oct 14 13:08:10.485256 master-1 kubenswrapper[4740]: I1014 13:08:10.484733 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" podStartSLOduration=91.541364804 podStartE2EDuration="1m40.484719732s" podCreationTimestamp="2025-10-14 13:06:30 +0000 UTC" 
firstStartedPulling="2025-10-14 13:08:00.678125426 +0000 UTC m=+106.488414755" lastFinishedPulling="2025-10-14 13:08:09.621480324 +0000 UTC m=+115.431769683" observedRunningTime="2025-10-14 13:08:10.483352408 +0000 UTC m=+116.293641737" watchObservedRunningTime="2025-10-14 13:08:10.484719732 +0000 UTC m=+116.295009061" Oct 14 13:08:10.494576 master-1 kubenswrapper[4740]: I1014 13:08:10.494407 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f24e44b3-ee1b-4452-ace9-9da83358c982-host\") pod \"master-1-debug-qq2pg\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.494576 master-1 kubenswrapper[4740]: I1014 13:08:10.494514 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgfm4\" (UniqueName: \"kubernetes.io/projected/f24e44b3-ee1b-4452-ace9-9da83358c982-kube-api-access-sgfm4\") pod \"master-1-debug-qq2pg\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.506451 master-1 kubenswrapper[4740]: I1014 13:08:10.506407 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="assisted-installer/assisted-installer-controller-mzrkb" podStartSLOduration=197.795466002 podStartE2EDuration="3m26.50639669s" podCreationTimestamp="2025-10-14 13:04:44 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.911457328 +0000 UTC m=+106.721746657" lastFinishedPulling="2025-10-14 13:08:09.622387996 +0000 UTC m=+115.432677345" observedRunningTime="2025-10-14 13:08:10.505907538 +0000 UTC m=+116.316196867" watchObservedRunningTime="2025-10-14 13:08:10.50639669 +0000 UTC m=+116.316686019" Oct 14 13:08:10.520254 master-1 kubenswrapper[4740]: I1014 13:08:10.520158 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" podStartSLOduration=93.064923233 podStartE2EDuration="1m41.520143425s" podCreationTimestamp="2025-10-14 13:06:29 +0000 UTC" firstStartedPulling="2025-10-14 13:08:01.176521993 +0000 UTC m=+106.986811362" lastFinishedPulling="2025-10-14 13:08:09.631742215 +0000 UTC m=+115.442031554" observedRunningTime="2025-10-14 13:08:10.517493331 +0000 UTC m=+116.327782660" watchObservedRunningTime="2025-10-14 13:08:10.520143425 +0000 UTC m=+116.330432754" Oct 14 13:08:10.535853 master-1 kubenswrapper[4740]: I1014 13:08:10.535596 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" podStartSLOduration=68.647817919 podStartE2EDuration="1m17.535569132s" podCreationTimestamp="2025-10-14 13:06:53 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.738881747 +0000 UTC m=+106.549171096" lastFinishedPulling="2025-10-14 13:08:09.62663298 +0000 UTC m=+115.436922309" observedRunningTime="2025-10-14 13:08:10.53383271 +0000 UTC m=+116.344122079" watchObservedRunningTime="2025-10-14 13:08:10.535569132 +0000 UTC m=+116.345858461" Oct 14 13:08:10.554013 master-1 kubenswrapper[4740]: I1014 13:08:10.553967 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" podStartSLOduration=69.717654302 podStartE2EDuration="1m18.55395257s" podCreationTimestamp="2025-10-14 13:06:52 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.88331262 +0000 UTC m=+106.693601959" lastFinishedPulling="2025-10-14 13:08:09.719610898 +0000 UTC m=+115.529900227" observedRunningTime="2025-10-14 13:08:10.551900681 +0000 UTC m=+116.362190010" watchObservedRunningTime="2025-10-14 13:08:10.55395257 +0000 UTC m=+116.364241899" Oct 14 13:08:10.567056 master-1 kubenswrapper[4740]: I1014 13:08:10.566076 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf" podStartSLOduration=98.718256378 podStartE2EDuration="1m47.566060066s" podCreationTimestamp="2025-10-14 13:06:23 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.773692116 +0000 UTC m=+106.583981455" lastFinishedPulling="2025-10-14 13:08:09.621495804 +0000 UTC m=+115.431785143" observedRunningTime="2025-10-14 13:08:10.56501337 +0000 UTC m=+116.375302709" watchObservedRunningTime="2025-10-14 13:08:10.566060066 +0000 UTC m=+116.376349405" Oct 14 13:08:10.596248 master-1 kubenswrapper[4740]: I1014 13:08:10.595310 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgfm4\" (UniqueName: \"kubernetes.io/projected/f24e44b3-ee1b-4452-ace9-9da83358c982-kube-api-access-sgfm4\") pod \"master-1-debug-qq2pg\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.596248 master-1 kubenswrapper[4740]: I1014 13:08:10.595584 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f24e44b3-ee1b-4452-ace9-9da83358c982-host\") pod \"master-1-debug-qq2pg\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.596248 master-1 kubenswrapper[4740]: I1014 13:08:10.595672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f24e44b3-ee1b-4452-ace9-9da83358c982-host\") pod \"master-1-debug-qq2pg\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.602784 master-1 kubenswrapper[4740]: I1014 13:08:10.599329 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" 
podStartSLOduration=96.554036232 podStartE2EDuration="1m45.599308657s" podCreationTimestamp="2025-10-14 13:06:25 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.576157098 +0000 UTC m=+106.386446427" lastFinishedPulling="2025-10-14 13:08:09.621429503 +0000 UTC m=+115.431718852" observedRunningTime="2025-10-14 13:08:10.59696074 +0000 UTC m=+116.407250069" watchObservedRunningTime="2025-10-14 13:08:10.599308657 +0000 UTC m=+116.409597996" Oct 14 13:08:10.625545 master-1 kubenswrapper[4740]: I1014 13:08:10.625498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgfm4\" (UniqueName: \"kubernetes.io/projected/f24e44b3-ee1b-4452-ace9-9da83358c982-kube-api-access-sgfm4\") pod \"master-1-debug-qq2pg\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.721712 master-1 kubenswrapper[4740]: I1014 13:08:10.721553 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:10.740151 master-1 kubenswrapper[4740]: W1014 13:08:10.738349 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf24e44b3_ee1b_4452_ace9_9da83358c982.slice/crio-5c5dd21320948ef0f69cd2fbad626901ec18f68ae9e72d18c91b18fc22da8338 WatchSource:0}: Error finding container 5c5dd21320948ef0f69cd2fbad626901ec18f68ae9e72d18c91b18fc22da8338: Status 404 returned error can't find the container with id 5c5dd21320948ef0f69cd2fbad626901ec18f68ae9e72d18c91b18fc22da8338 Oct 14 13:08:11.119197 master-1 kubenswrapper[4740]: I1014 13:08:11.118817 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt"] Oct 14 13:08:11.120389 master-1 kubenswrapper[4740]: I1014 13:08:11.120350 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" Oct 14 13:08:11.127085 master-1 kubenswrapper[4740]: I1014 13:08:11.126988 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt"] Oct 14 13:08:11.205517 master-1 kubenswrapper[4740]: I1014 13:08:11.205447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvzg6\" (UniqueName: \"kubernetes.io/projected/534fcd65-38f8-4d39-b4de-d7b2819318c7-kube-api-access-hvzg6\") pod \"csi-snapshot-controller-ddd7d64cd-5s4kt\" (UID: \"534fcd65-38f8-4d39-b4de-d7b2819318c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" Oct 14 13:08:11.307574 master-1 kubenswrapper[4740]: I1014 13:08:11.307507 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvzg6\" (UniqueName: \"kubernetes.io/projected/534fcd65-38f8-4d39-b4de-d7b2819318c7-kube-api-access-hvzg6\") pod \"csi-snapshot-controller-ddd7d64cd-5s4kt\" (UID: \"534fcd65-38f8-4d39-b4de-d7b2819318c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" Oct 14 13:08:11.340305 master-1 kubenswrapper[4740]: I1014 13:08:11.340198 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvzg6\" (UniqueName: \"kubernetes.io/projected/534fcd65-38f8-4d39-b4de-d7b2819318c7-kube-api-access-hvzg6\") pod \"csi-snapshot-controller-ddd7d64cd-5s4kt\" (UID: \"534fcd65-38f8-4d39-b4de-d7b2819318c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" Oct 14 13:08:11.433580 master-1 kubenswrapper[4740]: I1014 13:08:11.433407 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-1-debug-qq2pg" 
event={"ID":"f24e44b3-ee1b-4452-ace9-9da83358c982","Type":"ContainerStarted","Data":"5c5dd21320948ef0f69cd2fbad626901ec18f68ae9e72d18c91b18fc22da8338"} Oct 14 13:08:11.441919 master-1 kubenswrapper[4740]: I1014 13:08:11.441862 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" Oct 14 13:08:11.599536 master-1 kubenswrapper[4740]: I1014 13:08:11.599478 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt"] Oct 14 13:08:12.121661 master-1 kubenswrapper[4740]: I1014 13:08:12.121588 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"] Oct 14 13:08:12.122064 master-1 kubenswrapper[4740]: I1014 13:08:12.122028 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.151088 master-1 kubenswrapper[4740]: I1014 13:08:12.151033 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 13:08:12.151088 master-1 kubenswrapper[4740]: I1014 13:08:12.151055 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 13:08:12.151088 master-1 kubenswrapper[4740]: I1014 13:08:12.151086 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 13:08:12.154832 master-1 kubenswrapper[4740]: I1014 13:08:12.151200 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 13:08:12.154832 master-1 kubenswrapper[4740]: I1014 13:08:12.151495 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 13:08:12.154832 
master-1 kubenswrapper[4740]: I1014 13:08:12.151589 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 13:08:12.154832 master-1 kubenswrapper[4740]: I1014 13:08:12.151946 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.154832 master-1 kubenswrapper[4740]: I1014 13:08:12.152419 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.154832 master-1 kubenswrapper[4740]: I1014 13:08:12.152441 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.154832 master-1 kubenswrapper[4740]: I1014 13:08:12.152579 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.154832 master-1 
kubenswrapper[4740]: I1014 13:08:12.152700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2jpf\" (UniqueName: \"kubernetes.io/projected/d937f4ea-9e12-44a6-8fcf-b380421d36ae-kube-api-access-j2jpf\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.157628 master-1 kubenswrapper[4740]: I1014 13:08:12.157584 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"] Oct 14 13:08:12.253525 master-1 kubenswrapper[4740]: I1014 13:08:12.253287 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.253525 master-1 kubenswrapper[4740]: I1014 13:08:12.253524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.253776 master-1 kubenswrapper[4740]: I1014 13:08:12.253554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.253776 master-1 kubenswrapper[4740]: E1014 13:08:12.253676 4740 
secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:12.253776 master-1 kubenswrapper[4740]: I1014 13:08:12.253759 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2jpf\" (UniqueName: \"kubernetes.io/projected/d937f4ea-9e12-44a6-8fcf-b380421d36ae-kube-api-access-j2jpf\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.253882 master-1 kubenswrapper[4740]: I1014 13:08:12.253801 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:12.253882 master-1 kubenswrapper[4740]: E1014 13:08:12.253817 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:12.253882 master-1 kubenswrapper[4740]: E1014 13:08:12.253869 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:12.753803635 +0000 UTC m=+118.564093164 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : secret "serving-cert" not found Oct 14 13:08:12.253882 master-1 kubenswrapper[4740]: E1014 13:08:12.253881 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Oct 14 13:08:12.254131 master-1 kubenswrapper[4740]: E1014 13:08:12.254081 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:12.754069442 +0000 UTC m=+118.564358981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "client-ca" not found Oct 14 13:08:12.254131 master-1 kubenswrapper[4740]: E1014 13:08:12.254115 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:12.754106343 +0000 UTC m=+118.564395672 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "openshift-global-ca" not found
Oct 14 13:08:12.254131 master-1 kubenswrapper[4740]: E1014 13:08:12.253709 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Oct 14 13:08:12.254303 master-1 kubenswrapper[4740]: E1014 13:08:12.254157 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:12.754149694 +0000 UTC m=+118.564439243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "config" not found
Oct 14 13:08:12.289609 master-1 kubenswrapper[4740]: I1014 13:08:12.281405 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2jpf\" (UniqueName: \"kubernetes.io/projected/d937f4ea-9e12-44a6-8fcf-b380421d36ae-kube-api-access-j2jpf\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:12.438699 master-1 kubenswrapper[4740]: I1014 13:08:12.438548 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" event={"ID":"534fcd65-38f8-4d39-b4de-d7b2819318c7","Type":"ContainerStarted","Data":"6ae701ac639a5537554f58a832df6a6bed0704512e36b0d8ddc6656361aba797"}
Oct 14 13:08:12.760616 master-1 kubenswrapper[4740]: I1014 13:08:12.760551 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:12.760848 master-1 kubenswrapper[4740]: I1014 13:08:12.760630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:12.760848 master-1 kubenswrapper[4740]: I1014 13:08:12.760683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:12.760848 master-1 kubenswrapper[4740]: E1014 13:08:12.760770 4740 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Oct 14 13:08:12.760848 master-1 kubenswrapper[4740]: E1014 13:08:12.760813 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Oct 14 13:08:12.761045 master-1 kubenswrapper[4740]: I1014 13:08:12.760854 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:12.761045 master-1 kubenswrapper[4740]: E1014 13:08:12.760907 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 14 13:08:12.761045 master-1 kubenswrapper[4740]: E1014 13:08:12.760905 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:13.760875405 +0000 UTC m=+119.571164734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : secret "serving-cert" not found
Oct 14 13:08:12.761045 master-1 kubenswrapper[4740]: E1014 13:08:12.760960 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:13.760938936 +0000 UTC m=+119.571228265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "config" not found
Oct 14 13:08:12.761045 master-1 kubenswrapper[4740]: E1014 13:08:12.760974 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Oct 14 13:08:12.761045 master-1 kubenswrapper[4740]: E1014 13:08:12.761046 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:13.761022898 +0000 UTC m=+119.571312407 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "openshift-global-ca" not found
Oct 14 13:08:12.762832 master-1 kubenswrapper[4740]: E1014 13:08:12.761174 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:13.761165322 +0000 UTC m=+119.571454651 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "client-ca" not found
Oct 14 13:08:13.015894 master-1 kubenswrapper[4740]: I1014 13:08:13.015690 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"]
Oct 14 13:08:13.016195 master-1 kubenswrapper[4740]: I1014 13:08:13.016168 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.041969 master-1 kubenswrapper[4740]: I1014 13:08:13.041912 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Oct 14 13:08:13.042094 master-1 kubenswrapper[4740]: I1014 13:08:13.042056 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Oct 14 13:08:13.042183 master-1 kubenswrapper[4740]: I1014 13:08:13.042120 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Oct 14 13:08:13.042321 master-1 kubenswrapper[4740]: I1014 13:08:13.042293 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Oct 14 13:08:13.044486 master-1 kubenswrapper[4740]: I1014 13:08:13.042935 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Oct 14 13:08:13.052282 master-1 kubenswrapper[4740]: I1014 13:08:13.044775 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"]
Oct 14 13:08:13.067154 master-1 kubenswrapper[4740]: I1014 13:08:13.066849 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-config\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.067154 master-1 kubenswrapper[4740]: I1014 13:08:13.066967 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4c8\" (UniqueName: \"kubernetes.io/projected/0a959dc9-9b10-4cb5-b750-bedfa6fff093-kube-api-access-6m4c8\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.067422 master-1 kubenswrapper[4740]: I1014 13:08:13.067111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.067526 master-1 kubenswrapper[4740]: I1014 13:08:13.067426 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.169617 master-1 kubenswrapper[4740]: I1014 13:08:13.169528 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-config\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.169910 master-1 kubenswrapper[4740]: I1014 13:08:13.169633 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m4c8\" (UniqueName: \"kubernetes.io/projected/0a959dc9-9b10-4cb5-b750-bedfa6fff093-kube-api-access-6m4c8\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.169910 master-1 kubenswrapper[4740]: I1014 13:08:13.169754 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.169910 master-1 kubenswrapper[4740]: I1014 13:08:13.169805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.170055 master-1 kubenswrapper[4740]: E1014 13:08:13.169948 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 14 13:08:13.170055 master-1 kubenswrapper[4740]: E1014 13:08:13.170024 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:13.670004104 +0000 UTC m=+119.480293433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found
Oct 14 13:08:13.170376 master-1 kubenswrapper[4740]: E1014 13:08:13.170323 4740 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Oct 14 13:08:13.170448 master-1 kubenswrapper[4740]: E1014 13:08:13.170424 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:13.670396664 +0000 UTC m=+119.480686023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : secret "serving-cert" not found
Oct 14 13:08:13.171191 master-1 kubenswrapper[4740]: I1014 13:08:13.171132 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-config\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.205876 master-1 kubenswrapper[4740]: I1014 13:08:13.202791 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m4c8\" (UniqueName: \"kubernetes.io/projected/0a959dc9-9b10-4cb5-b750-bedfa6fff093-kube-api-access-6m4c8\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.460974 master-1 kubenswrapper[4740]: I1014 13:08:13.460873 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-m6qfh" event={"ID":"d25ed7db-e690-44d5-a1a4-ed29b8efeed1","Type":"ContainerStarted","Data":"ec89c7cfbdebbae782f5e2a16e389dd0efaaec699f7c08dd2fbc512fcc8b8a60"}
Oct 14 13:08:13.463908 master-1 kubenswrapper[4740]: I1014 13:08:13.463849 4740 generic.go:334] "Generic (PLEG): container finished" podID="f22c13e5-9b56-4f0c-a17a-677ba07226ff" containerID="cf3499e51b8f25927756937767daf9ab63aec9e6256e3e54009a5ab75f9f2958" exitCode=0
Oct 14 13:08:13.463966 master-1 kubenswrapper[4740]: I1014 13:08:13.463945 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" event={"ID":"f22c13e5-9b56-4f0c-a17a-677ba07226ff","Type":"ContainerDied","Data":"cf3499e51b8f25927756937767daf9ab63aec9e6256e3e54009a5ab75f9f2958"}
Oct 14 13:08:13.468118 master-1 kubenswrapper[4740]: I1014 13:08:13.468081 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" event={"ID":"f8b5ead9-7212-4a2f-8105-92d1c5384308","Type":"ContainerStarted","Data":"9301a402ed957f29e7bf36af46091070e2b25bc30c6da656535e4d6b92ed2fe1"}
Oct 14 13:08:13.468871 master-1 kubenswrapper[4740]: I1014 13:08:13.468842 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6"
Oct 14 13:08:13.477697 master-1 kubenswrapper[4740]: I1014 13:08:13.477547 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-m6qfh" podStartSLOduration=5.698064584 podStartE2EDuration="14.477518385s" podCreationTimestamp="2025-10-14 13:07:59 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.846583735 +0000 UTC m=+106.656873064" lastFinishedPulling="2025-10-14 13:08:09.626037536 +0000 UTC m=+115.436326865" observedRunningTime="2025-10-14 13:08:13.476301036 +0000 UTC m=+119.286590365" watchObservedRunningTime="2025-10-14 13:08:13.477518385 +0000 UTC m=+119.287807724"
Oct 14 13:08:13.503703 master-1 kubenswrapper[4740]: I1014 13:08:13.502457 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" podStartSLOduration=92.901425732 podStartE2EDuration="1m45.502439733s" podCreationTimestamp="2025-10-14 13:06:28 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.597177681 +0000 UTC m=+106.407467010" lastFinishedPulling="2025-10-14 13:08:13.198191652 +0000 UTC m=+119.008481011" observedRunningTime="2025-10-14 13:08:13.501119741 +0000 UTC m=+119.311409080" watchObservedRunningTime="2025-10-14 13:08:13.502439733 +0000 UTC m=+119.312729062"
Oct 14 13:08:13.679883 master-1 kubenswrapper[4740]: I1014 13:08:13.679757 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.679883 master-1 kubenswrapper[4740]: I1014 13:08:13.679837 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:13.680154 master-1 kubenswrapper[4740]: E1014 13:08:13.680043 4740 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Oct 14 13:08:13.680154 master-1 kubenswrapper[4740]: E1014 13:08:13.680121 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 14 13:08:13.680256 master-1 kubenswrapper[4740]: E1014 13:08:13.680172 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:14.680157398 +0000 UTC m=+120.490446727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found
Oct 14 13:08:13.681470 master-1 kubenswrapper[4740]: E1014 13:08:13.681428 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:14.681387288 +0000 UTC m=+120.491676707 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : secret "serving-cert" not found
Oct 14 13:08:13.781369 master-1 kubenswrapper[4740]: I1014 13:08:13.781301 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654"
Oct 14 13:08:13.781861 master-1 kubenswrapper[4740]: I1014 13:08:13.781402 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:13.781861 master-1 kubenswrapper[4740]: I1014 13:08:13.781567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:13.781861 master-1 kubenswrapper[4740]: I1014 13:08:13.781589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:13.781861 master-1 kubenswrapper[4740]: I1014 13:08:13.781614 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:13.781861 master-1 kubenswrapper[4740]: E1014 13:08:13.781705 4740 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Oct 14 13:08:13.782068 master-1 kubenswrapper[4740]: E1014 13:08:13.781852 4740 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Oct 14 13:08:13.782068 master-1 kubenswrapper[4740]: E1014 13:08:13.781760 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 14 13:08:13.782147 master-1 kubenswrapper[4740]: E1014 13:08:13.781873 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs podName:1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:17.781842249 +0000 UTC m=+183.592131578 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs") pod "network-metrics-daemon-8l654" (UID: "1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1") : secret "metrics-daemon-secret" not found
Oct 14 13:08:13.782147 master-1 kubenswrapper[4740]: E1014 13:08:13.782102 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.782090765 +0000 UTC m=+121.592380094 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : secret "serving-cert" not found
Oct 14 13:08:13.782147 master-1 kubenswrapper[4740]: E1014 13:08:13.782118 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:15.782111335 +0000 UTC m=+121.592400664 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "client-ca" not found
Oct 14 13:08:13.783295 master-1 kubenswrapper[4740]: I1014 13:08:13.783223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:13.783597 master-1 kubenswrapper[4740]: I1014 13:08:13.783566 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"
Oct 14 13:08:14.580212 master-1 kubenswrapper[4740]: I1014 13:08:14.580099 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"]
Oct 14 13:08:14.581216 master-1 kubenswrapper[4740]: E1014 13:08:14.580455 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" podUID="d937f4ea-9e12-44a6-8fcf-b380421d36ae"
Oct 14 13:08:14.695531 master-1 kubenswrapper[4740]: I1014 13:08:14.695445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:14.695804 master-1 kubenswrapper[4740]: I1014 13:08:14.695593 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:14.695804 master-1 kubenswrapper[4740]: E1014 13:08:14.695743 4740 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Oct 14 13:08:14.695935 master-1 kubenswrapper[4740]: E1014 13:08:14.695854 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 14 13:08:14.695935 master-1 kubenswrapper[4740]: E1014 13:08:14.695906 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:16.695885106 +0000 UTC m=+122.506174435 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found
Oct 14 13:08:14.696194 master-1 kubenswrapper[4740]: E1014 13:08:14.696052 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:16.69604445 +0000 UTC m=+122.506333779 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : secret "serving-cert" not found
Oct 14 13:08:15.407498 master-1 kubenswrapper[4740]: I1014 13:08:15.405153 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:15.407828 master-1 kubenswrapper[4740]: E1014 13:08:15.405712 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Oct 14 13:08:15.407828 master-1 kubenswrapper[4740]: E1014 13:08:15.407722 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.407680619 +0000 UTC m=+137.217969978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "performance-addon-operator-webhook-cert" not found
Oct 14 13:08:15.407828 master-1 kubenswrapper[4740]: I1014 13:08:15.407592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:08:15.407828 master-1 kubenswrapper[4740]: E1014 13:08:15.407750 4740 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: E1014 13:08:15.407847 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls podName:398ba6fd-0f8f-46af-b690-61a6eec9176b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.407821802 +0000 UTC m=+137.218111131 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls") pod "ingress-operator-766ddf4575-xhdjt" (UID: "398ba6fd-0f8f-46af-b690-61a6eec9176b") : secret "metrics-tls" not found
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: I1014 13:08:15.407873 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: I1014 13:08:15.407914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: I1014 13:08:15.407940 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: I1014 13:08:15.407969 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: I1014 13:08:15.407997 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: I1014 13:08:15.408021 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: E1014 13:08:15.408031 4740 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Oct 14 13:08:15.408094 master-1 kubenswrapper[4740]: E1014 13:08:15.408099 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls podName:655ad1ce-582a-4728-8bfd-ca4164468de3 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408076938 +0000 UTC m=+137.218366297 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls") pod "cluster-node-tuning-operator-7866c9bdf4-d4dlj" (UID: "655ad1ce-582a-4728-8bfd-ca4164468de3") : secret "node-tuning-operator-tls" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408111 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408159 4740 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408204 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.40818147 +0000 UTC m=+137.218470799 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-operator-tls" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: I1014 13:08:15.408200 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408237 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert podName:1fa31cdd-e051-4987-a1a2-814fc7445e6b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408215551 +0000 UTC m=+137.218504870 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-5cf49b6487-4cf2d" (UID: "1fa31cdd-e051-4987-a1a2-814fc7445e6b") : secret "cloud-credential-operator-serving-cert" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408243 4740 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408280 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert podName:ab511c1d-28e3-448a-86ec-cea21871fd26 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408270873 +0000 UTC m=+137.218560442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert") pod "cluster-autoscaler-operator-7ff449c7c5-nmpfk" (UID: "ab511c1d-28e3-448a-86ec-cea21871fd26") : secret "cluster-autoscaler-operator-cert" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408298 4740 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408321 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls podName:62ef5e24-de36-454a-a34c-e741a86a6f96 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408315604 +0000 UTC m=+137.218604933 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-cxtgh" (UID: "62ef5e24-de36-454a-a34c-e741a86a6f96") : secret "cluster-monitoring-operator-tls" not found Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: I1014 13:08:15.408299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408338 4740 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: I1014 13:08:15.408378 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408400 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls podName:b51ef0bc-8b0e-4fab-b101-660ed408924c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408385735 +0000 UTC m=+137.218675094 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls") pod "machine-api-operator-9dbb96f7-s66vj" (UID: "b51ef0bc-8b0e-4fab-b101-660ed408924c") : secret "machine-api-operator-tls" not found Oct 14 13:08:15.408660 master-1 kubenswrapper[4740]: E1014 13:08:15.408123 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408464 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert podName:7be129fe-d04d-4384-a0e9-76b3148a1f3e nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408451388 +0000 UTC m=+137.218740747 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-j2bjv" (UID: "7be129fe-d04d-4384-a0e9-76b3148a1f3e") : secret "package-server-manager-serving-cert" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408341 4740 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: I1014 13:08:15.408505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408557 4740 
secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408594 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls podName:c4ca808a-394d-4a17-ac12-1df264c7ed92 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408501899 +0000 UTC m=+137.218791258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls") pod "machine-config-operator-7b75469658-j2dbc" (UID: "c4ca808a-394d-4a17-ac12-1df264c7ed92") : secret "mco-proxy-tls" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408603 4740 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408625 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls podName:b1a35e1e-333f-480c-b1d6-059475700627 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408617612 +0000 UTC m=+137.218906941 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls") pod "cluster-image-registry-operator-6b8674d7ff-gspqw" (UID: "b1a35e1e-333f-480c-b1d6-059475700627") : secret "image-registry-operator-tls" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408714 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert podName:bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.408704524 +0000 UTC m=+137.218993853 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert") pod "cluster-baremetal-operator-6c8fbf4498-kcckh" (UID: "bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1") : secret "cluster-baremetal-webhook-server-cert" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: I1014 13:08:15.408808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.408989 4740 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:15.409638 master-1 kubenswrapper[4740]: E1014 13:08:15.409065 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls 
podName:a4ab71e1-9b1f-42ee-8abb-8f998e3cae74 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.409043402 +0000 UTC m=+137.219332771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-84f9cbd5d9-n87md" (UID: "a4ab71e1-9b1f-42ee-8abb-8f998e3cae74") : secret "control-plane-machine-set-operator-tls" not found Oct 14 13:08:15.475989 master-1 kubenswrapper[4740]: I1014 13:08:15.475915 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:15.486954 master-1 kubenswrapper[4740]: I1014 13:08:15.486897 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:15.509842 master-1 kubenswrapper[4740]: I1014 13:08:15.509787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:08:15.509842 master-1 kubenswrapper[4740]: I1014 13:08:15.509836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:08:15.510115 master-1 kubenswrapper[4740]: I1014 13:08:15.509894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:08:15.510115 master-1 kubenswrapper[4740]: I1014 13:08:15.509935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:08:15.510115 master-1 kubenswrapper[4740]: I1014 13:08:15.510023 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:08:15.510115 master-1 kubenswrapper[4740]: E1014 13:08:15.510040 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510142 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs podName:01742ba1-f43b-4ff2-97d5-1a535e925a0f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.510105967 +0000 UTC m=+137.320395336 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs") pod "multus-admission-controller-77b66fddc8-9npgz" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f") : secret "multus-admission-controller-secret" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510149 4740 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510183 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510205 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls podName:910af03d-893a-443d-b6ed-fe21c26951f4 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.510189939 +0000 UTC m=+137.320479308 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls") pod "dns-operator-7769d9677-nh2qc" (UID: "910af03d-893a-443d-b6ed-fe21c26951f4") : secret "metrics-tls" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: I1014 13:08:15.510056 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510247 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert podName:57526e49-7f51-4a66-8f48-0c485fc1e88f nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.51021728 +0000 UTC m=+137.320506609 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert") pod "olm-operator-867f8475d9-fl56c" (UID: "57526e49-7f51-4a66-8f48-0c485fc1e88f") : secret "olm-operator-serving-cert" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510041 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510277 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs podName:ec085d84-4833-4e0b-9e6a-35b983a7059b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.510270161 +0000 UTC m=+137.320559490 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs") pod "multus-admission-controller-77b66fddc8-mgc7h" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b") : secret "multus-admission-controller-secret" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: E1014 13:08:15.510285 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Oct 14 13:08:15.510385 master-1 kubenswrapper[4740]: I1014 13:08:15.510322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" Oct 14 13:08:15.511053 master-1 kubenswrapper[4740]: E1014 13:08:15.510370 4740 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Oct 14 13:08:15.511053 master-1 kubenswrapper[4740]: E1014 13:08:15.510455 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert podName:3d292fbb-b49c-4543-993b-738103c7419b nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.510416824 +0000 UTC m=+137.320706183 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert") pod "catalog-operator-f966fb6f8-dwwm2" (UID: "3d292fbb-b49c-4543-993b-738103c7419b") : secret "catalog-operator-serving-cert" not found Oct 14 13:08:15.511053 master-1 kubenswrapper[4740]: E1014 13:08:15.510294 4740 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Oct 14 13:08:15.511053 master-1 kubenswrapper[4740]: E1014 13:08:15.510552 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls podName:1d68f537-be68-4623-bded-e5d7fb5c3573 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.510515447 +0000 UTC m=+137.320804996 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls") pod "machine-approver-7876f99457-kpq7g" (UID: "1d68f537-be68-4623-bded-e5d7fb5c3573") : secret "machine-approver-tls" not found Oct 14 13:08:15.511053 master-1 kubenswrapper[4740]: E1014 13:08:15.510585 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics podName:2a106ff8-388a-4d30-8370-aad661eb4365 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:31.510566398 +0000 UTC m=+137.320855947 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-djh96" (UID: "2a106ff8-388a-4d30-8370-aad661eb4365") : secret "marketplace-operator-metrics" not found Oct 14 13:08:15.611551 master-1 kubenswrapper[4740]: I1014 13:08:15.611468 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles\") pod \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " Oct 14 13:08:15.611551 master-1 kubenswrapper[4740]: I1014 13:08:15.611518 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2jpf\" (UniqueName: \"kubernetes.io/projected/d937f4ea-9e12-44a6-8fcf-b380421d36ae-kube-api-access-j2jpf\") pod \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " Oct 14 13:08:15.611551 master-1 kubenswrapper[4740]: I1014 13:08:15.611561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config\") pod \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " Oct 14 13:08:15.612932 master-1 kubenswrapper[4740]: I1014 13:08:15.612320 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d937f4ea-9e12-44a6-8fcf-b380421d36ae" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:08:15.612932 master-1 kubenswrapper[4740]: I1014 13:08:15.612344 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config" (OuterVolumeSpecName: "config") pod "d937f4ea-9e12-44a6-8fcf-b380421d36ae" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:08:15.619407 master-1 kubenswrapper[4740]: I1014 13:08:15.619325 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d937f4ea-9e12-44a6-8fcf-b380421d36ae-kube-api-access-j2jpf" (OuterVolumeSpecName: "kube-api-access-j2jpf") pod "d937f4ea-9e12-44a6-8fcf-b380421d36ae" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae"). InnerVolumeSpecName "kube-api-access-j2jpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:08:15.712966 master-1 kubenswrapper[4740]: I1014 13:08:15.712879 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 14 13:08:15.712966 master-1 kubenswrapper[4740]: I1014 13:08:15.712923 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2jpf\" (UniqueName: \"kubernetes.io/projected/d937f4ea-9e12-44a6-8fcf-b380421d36ae-kube-api-access-j2jpf\") on node \"master-1\" DevicePath \"\"" Oct 14 13:08:15.712966 master-1 kubenswrapper[4740]: I1014 13:08:15.712937 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:08:15.814023 master-1 kubenswrapper[4740]: I1014 13:08:15.813945 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:15.814023 master-1 kubenswrapper[4740]: I1014 13:08:15.814020 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca\") pod \"controller-manager-5d9b59775c-x2cz2\" (UID: \"d937f4ea-9e12-44a6-8fcf-b380421d36ae\") " pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:15.814339 master-1 kubenswrapper[4740]: E1014 13:08:15.814180 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:15.814339 master-1 kubenswrapper[4740]: E1014 13:08:15.814263 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. No retries permitted until 2025-10-14 13:08:19.814245866 +0000 UTC m=+125.624535195 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : configmap "client-ca" not found Oct 14 13:08:15.814424 master-1 kubenswrapper[4740]: E1014 13:08:15.814342 4740 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:15.814424 master-1 kubenswrapper[4740]: E1014 13:08:15.814372 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert podName:d937f4ea-9e12-44a6-8fcf-b380421d36ae nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:19.814360068 +0000 UTC m=+125.624649397 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert") pod "controller-manager-5d9b59775c-x2cz2" (UID: "d937f4ea-9e12-44a6-8fcf-b380421d36ae") : secret "serving-cert" not found Oct 14 13:08:16.479860 master-1 kubenswrapper[4740]: I1014 13:08:16.479785 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9b59775c-x2cz2" Oct 14 13:08:16.550945 master-1 kubenswrapper[4740]: I1014 13:08:16.550886 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bcf7659b-pckjm"] Oct 14 13:08:16.552044 master-1 kubenswrapper[4740]: I1014 13:08:16.551606 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.554331 master-1 kubenswrapper[4740]: I1014 13:08:16.554300 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 13:08:16.554440 master-1 kubenswrapper[4740]: I1014 13:08:16.554365 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 13:08:16.554523 master-1 kubenswrapper[4740]: I1014 13:08:16.554488 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 13:08:16.554939 master-1 kubenswrapper[4740]: I1014 13:08:16.554897 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 13:08:16.555068 master-1 kubenswrapper[4740]: I1014 13:08:16.554870 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 13:08:16.555213 
master-1 kubenswrapper[4740]: I1014 13:08:16.555112 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"] Oct 14 13:08:16.559100 master-1 kubenswrapper[4740]: I1014 13:08:16.559059 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d9b59775c-x2cz2"] Oct 14 13:08:16.561576 master-1 kubenswrapper[4740]: I1014 13:08:16.561516 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 13:08:16.562886 master-1 kubenswrapper[4740]: I1014 13:08:16.562811 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bcf7659b-pckjm"] Oct 14 13:08:16.646022 master-1 kubenswrapper[4740]: I1014 13:08:16.645899 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-proxy-ca-bundles\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.647887 master-1 kubenswrapper[4740]: I1014 13:08:16.646166 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.647887 master-1 kubenswrapper[4740]: I1014 13:08:16.646596 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"controller-manager-bcf7659b-pckjm\" (UID: 
\"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.647887 master-1 kubenswrapper[4740]: I1014 13:08:16.646725 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-config\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.647887 master-1 kubenswrapper[4740]: I1014 13:08:16.646810 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2hvc\" (UniqueName: \"kubernetes.io/projected/686cb294-f678-4e26-9305-2756573cadb8-kube-api-access-s2hvc\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.647887 master-1 kubenswrapper[4740]: I1014 13:08:16.647023 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d937f4ea-9e12-44a6-8fcf-b380421d36ae-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:08:16.647887 master-1 kubenswrapper[4740]: I1014 13:08:16.647075 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d937f4ea-9e12-44a6-8fcf-b380421d36ae-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:08:16.748188 master-1 kubenswrapper[4740]: I1014 13:08:16.748110 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 
13:08:16.748465 master-1 kubenswrapper[4740]: I1014 13:08:16.748205 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:08:16.748465 master-1 kubenswrapper[4740]: I1014 13:08:16.748316 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:08:16.748465 master-1 kubenswrapper[4740]: I1014 13:08:16.748353 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.748465 master-1 kubenswrapper[4740]: I1014 13:08:16.748394 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-config\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.748465 master-1 kubenswrapper[4740]: E1014 13:08:16.748314 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:16.748714 master-1 kubenswrapper[4740]: I1014 13:08:16.748523 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2hvc\" (UniqueName: \"kubernetes.io/projected/686cb294-f678-4e26-9305-2756573cadb8-kube-api-access-s2hvc\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.748714 master-1 kubenswrapper[4740]: E1014 13:08:16.748558 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:17.248524117 +0000 UTC m=+123.058813486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:08:16.748714 master-1 kubenswrapper[4740]: E1014 13:08:16.748561 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:16.748714 master-1 kubenswrapper[4740]: E1014 13:08:16.748434 4740 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:16.748915 master-1 kubenswrapper[4740]: E1014 13:08:16.748725 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:20.748690991 +0000 UTC m=+126.558980530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found Oct 14 13:08:16.748915 master-1 kubenswrapper[4740]: I1014 13:08:16.748854 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-proxy-ca-bundles\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.749025 master-1 kubenswrapper[4740]: E1014 13:08:16.748998 4740 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:16.749080 master-1 kubenswrapper[4740]: E1014 13:08:16.749061 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:17.249040409 +0000 UTC m=+123.059329998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : secret "serving-cert" not found Oct 14 13:08:16.749134 master-1 kubenswrapper[4740]: E1014 13:08:16.749101 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:20.74908589 +0000 UTC m=+126.559375489 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : secret "serving-cert" not found Oct 14 13:08:16.750811 master-1 kubenswrapper[4740]: I1014 13:08:16.750752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-proxy-ca-bundles\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.751075 master-1 kubenswrapper[4740]: I1014 13:08:16.751021 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-config\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.783626 master-1 kubenswrapper[4740]: I1014 13:08:16.783557 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2hvc\" (UniqueName: \"kubernetes.io/projected/686cb294-f678-4e26-9305-2756573cadb8-kube-api-access-s2hvc\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:16.950538 master-1 kubenswrapper[4740]: I1014 13:08:16.950491 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d937f4ea-9e12-44a6-8fcf-b380421d36ae" path="/var/lib/kubelet/pods/d937f4ea-9e12-44a6-8fcf-b380421d36ae/volumes" Oct 14 13:08:17.257101 master-1 kubenswrapper[4740]: I1014 13:08:17.257058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:17.257329 master-1 kubenswrapper[4740]: I1014 13:08:17.257192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:17.257329 master-1 kubenswrapper[4740]: E1014 13:08:17.257270 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:17.257407 master-1 kubenswrapper[4740]: E1014 13:08:17.257372 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:18.257349388 +0000 UTC m=+124.067638727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:08:17.257450 master-1 kubenswrapper[4740]: E1014 13:08:17.257403 4740 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:17.257531 master-1 kubenswrapper[4740]: E1014 13:08:17.257511 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:18.257492702 +0000 UTC m=+124.067782031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : secret "serving-cert" not found Oct 14 13:08:18.270035 master-1 kubenswrapper[4740]: I1014 13:08:18.269961 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:18.270872 master-1 kubenswrapper[4740]: I1014 13:08:18.270187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:18.270872 master-1 kubenswrapper[4740]: E1014 13:08:18.270328 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:18.270872 master-1 kubenswrapper[4740]: E1014 13:08:18.270412 4740 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:18.270872 master-1 kubenswrapper[4740]: E1014 13:08:18.270453 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:20.270401691 +0000 UTC m=+126.080691020 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:08:18.270872 master-1 kubenswrapper[4740]: E1014 13:08:18.270599 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:20.270574105 +0000 UTC m=+126.080863434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : secret "serving-cert" not found Oct 14 13:08:18.367839 master-1 kubenswrapper[4740]: I1014 13:08:18.367761 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" Oct 14 13:08:20.296255 master-1 kubenswrapper[4740]: I1014 13:08:20.295888 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:20.296822 master-1 kubenswrapper[4740]: E1014 13:08:20.296025 4740 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:20.296822 master-1 kubenswrapper[4740]: E1014 13:08:20.296403 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert 
podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:24.296384051 +0000 UTC m=+130.106673380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : secret "serving-cert" not found Oct 14 13:08:20.296822 master-1 kubenswrapper[4740]: I1014 13:08:20.296555 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:20.296822 master-1 kubenswrapper[4740]: E1014 13:08:20.296795 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:20.297018 master-1 kubenswrapper[4740]: E1014 13:08:20.296912 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:24.296885343 +0000 UTC m=+130.107174712 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:08:20.802789 master-1 kubenswrapper[4740]: I1014 13:08:20.802484 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:08:20.802789 master-1 kubenswrapper[4740]: I1014 13:08:20.802573 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:08:20.802789 master-1 kubenswrapper[4740]: E1014 13:08:20.802673 4740 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:20.802789 master-1 kubenswrapper[4740]: E1014 13:08:20.802712 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:20.802789 master-1 kubenswrapper[4740]: E1014 13:08:20.802778 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:28.802750282 +0000 UTC m=+134.613039651 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : secret "serving-cert" not found Oct 14 13:08:20.802789 master-1 kubenswrapper[4740]: E1014 13:08:20.802807 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:28.802794903 +0000 UTC m=+134.613084272 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found Oct 14 13:08:21.506852 master-1 kubenswrapper[4740]: I1014 13:08:21.506335 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" event={"ID":"ec50d087-259f-45c0-a15a-7fe949ae66dd","Type":"ContainerStarted","Data":"216b13d5dbb6d6de55f0908c7858dde15ec479860670d3ed647a6491b5a2bb13"} Oct 14 13:08:21.511639 master-1 kubenswrapper[4740]: I1014 13:08:21.511573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" event={"ID":"24d7cccd-3100-4c4f-9303-fc57993b465e","Type":"ContainerStarted","Data":"f9c246644b612436343a8707c550c7c44e4b0bad27bf2f5a48fa4db7fd206e5e"} Oct 14 13:08:21.513377 master-1 kubenswrapper[4740]: I1014 13:08:21.513334 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" 
event={"ID":"97b0a691-fe82-46b1-9f04-671aed7e10be","Type":"ContainerStarted","Data":"a2c84632a83edc5bdf1990821861bcf4fc01584beaa995f3da01e736f3b922bc"} Oct 14 13:08:21.517676 master-1 kubenswrapper[4740]: I1014 13:08:21.517107 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" event={"ID":"534fcd65-38f8-4d39-b4de-d7b2819318c7","Type":"ContainerStarted","Data":"dffb0dbfd0dd6b154c3c7b95d2a9f804bfde96a4971185a307602b1b3a7fc419"} Oct 14 13:08:21.522357 master-1 kubenswrapper[4740]: I1014 13:08:21.521900 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" event={"ID":"f22c13e5-9b56-4f0c-a17a-677ba07226ff","Type":"ContainerStarted","Data":"d046d2e50f81162a1d671addbe36ccd4575c1b224fffcac736f18b02381763b4"} Oct 14 13:08:21.524835 master-1 kubenswrapper[4740]: I1014 13:08:21.524776 4740 generic.go:334] "Generic (PLEG): container finished" podID="f24e44b3-ee1b-4452-ace9-9da83358c982" containerID="486d1c7793079fd8e15bb76874099efcbefc5530060963291d8dcd62d879e12b" exitCode=0 Oct 14 13:08:21.524835 master-1 kubenswrapper[4740]: I1014 13:08:21.524821 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/master-1-debug-qq2pg" event={"ID":"f24e44b3-ee1b-4452-ace9-9da83358c982","Type":"ContainerDied","Data":"486d1c7793079fd8e15bb76874099efcbefc5530060963291d8dcd62d879e12b"} Oct 14 13:08:21.533272 master-1 kubenswrapper[4740]: I1014 13:08:21.533180 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" podStartSLOduration=69.322858812 podStartE2EDuration="1m29.5331689s" podCreationTimestamp="2025-10-14 13:06:52 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.933505185 +0000 UTC m=+106.743794514" lastFinishedPulling="2025-10-14 13:08:21.143815263 +0000 UTC m=+126.954104602" 
observedRunningTime="2025-10-14 13:08:21.530811462 +0000 UTC m=+127.341100801" watchObservedRunningTime="2025-10-14 13:08:21.5331689 +0000 UTC m=+127.343458229" Oct 14 13:08:21.553268 master-1 kubenswrapper[4740]: I1014 13:08:21.552071 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt" podStartSLOduration=1.061371441 podStartE2EDuration="10.55205316s" podCreationTimestamp="2025-10-14 13:08:11 +0000 UTC" firstStartedPulling="2025-10-14 13:08:11.625702404 +0000 UTC m=+117.435991733" lastFinishedPulling="2025-10-14 13:08:21.116384093 +0000 UTC m=+126.926673452" observedRunningTime="2025-10-14 13:08:21.54956151 +0000 UTC m=+127.359850839" watchObservedRunningTime="2025-10-14 13:08:21.55205316 +0000 UTC m=+127.362342489" Oct 14 13:08:21.564115 master-1 kubenswrapper[4740]: I1014 13:08:21.564048 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" podStartSLOduration=69.157864748 podStartE2EDuration="1m29.564023683s" podCreationTimestamp="2025-10-14 13:06:52 +0000 UTC" firstStartedPulling="2025-10-14 13:08:00.71439416 +0000 UTC m=+106.524683489" lastFinishedPulling="2025-10-14 13:08:21.120553065 +0000 UTC m=+126.930842424" observedRunningTime="2025-10-14 13:08:21.563569182 +0000 UTC m=+127.373858551" watchObservedRunningTime="2025-10-14 13:08:21.564023683 +0000 UTC m=+127.374313042" Oct 14 13:08:21.582388 master-1 kubenswrapper[4740]: I1014 13:08:21.579509 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" podStartSLOduration=67.499030471 podStartE2EDuration="1m27.57948592s" podCreationTimestamp="2025-10-14 13:06:54 +0000 UTC" firstStartedPulling="2025-10-14 13:08:01.080829899 +0000 UTC m=+106.891119238" lastFinishedPulling="2025-10-14 
13:08:21.161285358 +0000 UTC m=+126.971574687" observedRunningTime="2025-10-14 13:08:21.575619305 +0000 UTC m=+127.385908634" watchObservedRunningTime="2025-10-14 13:08:21.57948592 +0000 UTC m=+127.389775259" Oct 14 13:08:21.599717 master-1 kubenswrapper[4740]: I1014 13:08:21.599061 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" podStartSLOduration=67.45840467 podStartE2EDuration="1m27.599045187s" podCreationTimestamp="2025-10-14 13:06:54 +0000 UTC" firstStartedPulling="2025-10-14 13:08:01.004160429 +0000 UTC m=+106.814449758" lastFinishedPulling="2025-10-14 13:08:21.144800926 +0000 UTC m=+126.955090275" observedRunningTime="2025-10-14 13:08:21.598897284 +0000 UTC m=+127.409186623" watchObservedRunningTime="2025-10-14 13:08:21.599045187 +0000 UTC m=+127.409334506" Oct 14 13:08:21.613279 master-1 kubenswrapper[4740]: I1014 13:08:21.612339 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["assisted-installer/master-1-debug-qq2pg"] Oct 14 13:08:21.614651 master-1 kubenswrapper[4740]: I1014 13:08:21.614482 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["assisted-installer/master-1-debug-qq2pg"] Oct 14 13:08:22.492184 master-1 kubenswrapper[4740]: I1014 13:08:22.492013 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-5c6d48559d-v4vd9"] Oct 14 13:08:22.492552 master-1 kubenswrapper[4740]: E1014 13:08:22.492375 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24e44b3-ee1b-4452-ace9-9da83358c982" containerName="container-00" Oct 14 13:08:22.492552 master-1 kubenswrapper[4740]: I1014 13:08:22.492403 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24e44b3-ee1b-4452-ace9-9da83358c982" containerName="container-00" Oct 14 13:08:22.492552 master-1 kubenswrapper[4740]: I1014 13:08:22.492555 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f24e44b3-ee1b-4452-ace9-9da83358c982" containerName="container-00" Oct 14 13:08:22.493838 master-1 kubenswrapper[4740]: I1014 13:08:22.493786 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.497767 master-1 kubenswrapper[4740]: I1014 13:08:22.497700 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Oct 14 13:08:22.498205 master-1 kubenswrapper[4740]: I1014 13:08:22.498134 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 14 13:08:22.498205 master-1 kubenswrapper[4740]: I1014 13:08:22.498196 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 14 13:08:22.500627 master-1 kubenswrapper[4740]: I1014 13:08:22.500551 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Oct 14 13:08:22.500627 master-1 kubenswrapper[4740]: I1014 13:08:22.500621 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 14 13:08:22.500845 master-1 kubenswrapper[4740]: I1014 13:08:22.500704 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 14 13:08:22.500845 master-1 kubenswrapper[4740]: I1014 13:08:22.500735 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 14 13:08:22.500845 master-1 kubenswrapper[4740]: I1014 13:08:22.500558 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 14 13:08:22.500845 master-1 kubenswrapper[4740]: I1014 13:08:22.500573 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 14 13:08:22.506309 master-1 kubenswrapper[4740]: I1014 13:08:22.506214 
4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5c6d48559d-v4vd9"] Oct 14 13:08:22.510465 master-1 kubenswrapper[4740]: I1014 13:08:22.510402 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.521826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-audit-dir\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.521928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-node-pullsecrets\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522041 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwbvg\" (UniqueName: \"kubernetes.io/projected/5fd32983-7bea-471a-b6a6-36c25603a68c-kube-api-access-nwbvg\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-image-import-ca\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " 
pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522156 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522190 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-client\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522224 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-serving-cert\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-config\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522442 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-serving-ca\") 
pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522610 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-encryption-config\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.522764 master-1 kubenswrapper[4740]: I1014 13:08:22.522647 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-trusted-ca-bundle\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.536904 master-1 kubenswrapper[4740]: I1014 13:08:22.536833 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/cluster-olm-operator/0.log" Oct 14 13:08:22.538162 master-1 kubenswrapper[4740]: I1014 13:08:22.538118 4740 generic.go:334] "Generic (PLEG): container finished" podID="f22c13e5-9b56-4f0c-a17a-677ba07226ff" containerID="d046d2e50f81162a1d671addbe36ccd4575c1b224fffcac736f18b02381763b4" exitCode=255 Oct 14 13:08:22.538384 master-1 kubenswrapper[4740]: I1014 13:08:22.538224 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" event={"ID":"f22c13e5-9b56-4f0c-a17a-677ba07226ff","Type":"ContainerDied","Data":"d046d2e50f81162a1d671addbe36ccd4575c1b224fffcac736f18b02381763b4"} Oct 14 13:08:22.539004 master-1 kubenswrapper[4740]: I1014 13:08:22.538765 4740 scope.go:117] 
"RemoveContainer" containerID="d046d2e50f81162a1d671addbe36ccd4575c1b224fffcac736f18b02381763b4" Oct 14 13:08:22.540788 master-1 kubenswrapper[4740]: I1014 13:08:22.540496 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/0.log" Oct 14 13:08:22.540788 master-1 kubenswrapper[4740]: I1014 13:08:22.540541 4740 generic.go:334] "Generic (PLEG): container finished" podID="97b0a691-fe82-46b1-9f04-671aed7e10be" containerID="a2c84632a83edc5bdf1990821861bcf4fc01584beaa995f3da01e736f3b922bc" exitCode=255 Oct 14 13:08:22.540788 master-1 kubenswrapper[4740]: I1014 13:08:22.540768 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" event={"ID":"97b0a691-fe82-46b1-9f04-671aed7e10be","Type":"ContainerDied","Data":"a2c84632a83edc5bdf1990821861bcf4fc01584beaa995f3da01e736f3b922bc"} Oct 14 13:08:22.541336 master-1 kubenswrapper[4740]: I1014 13:08:22.541296 4740 scope.go:117] "RemoveContainer" containerID="a2c84632a83edc5bdf1990821861bcf4fc01584beaa995f3da01e736f3b922bc" Oct 14 13:08:22.574607 master-1 kubenswrapper[4740]: I1014 13:08:22.574562 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:22.623879 master-1 kubenswrapper[4740]: I1014 13:08:22.623817 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgfm4\" (UniqueName: \"kubernetes.io/projected/f24e44b3-ee1b-4452-ace9-9da83358c982-kube-api-access-sgfm4\") pod \"f24e44b3-ee1b-4452-ace9-9da83358c982\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " Oct 14 13:08:22.624006 master-1 kubenswrapper[4740]: I1014 13:08:22.623914 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f24e44b3-ee1b-4452-ace9-9da83358c982-host\") pod \"f24e44b3-ee1b-4452-ace9-9da83358c982\" (UID: \"f24e44b3-ee1b-4452-ace9-9da83358c982\") " Oct 14 13:08:22.624161 master-1 kubenswrapper[4740]: I1014 13:08:22.624113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-encryption-config\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.624265 master-1 kubenswrapper[4740]: I1014 13:08:22.624164 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-trusted-ca-bundle\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.624560 master-1 kubenswrapper[4740]: I1014 13:08:22.624485 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-audit-dir\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " 
pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.624688 master-1 kubenswrapper[4740]: I1014 13:08:22.624646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-node-pullsecrets\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.624777 master-1 kubenswrapper[4740]: I1014 13:08:22.624733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-audit-dir\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.624898 master-1 kubenswrapper[4740]: I1014 13:08:22.624860 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-node-pullsecrets\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.624898 master-1 kubenswrapper[4740]: I1014 13:08:22.624870 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwbvg\" (UniqueName: \"kubernetes.io/projected/5fd32983-7bea-471a-b6a6-36c25603a68c-kube-api-access-nwbvg\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.625024 master-1 kubenswrapper[4740]: I1014 13:08:22.624993 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-image-import-ca\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: 
\"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.625068 master-1 kubenswrapper[4740]: I1014 13:08:22.625029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.625068 master-1 kubenswrapper[4740]: I1014 13:08:22.625053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-client\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.625138 master-1 kubenswrapper[4740]: I1014 13:08:22.625074 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-serving-cert\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.625255 master-1 kubenswrapper[4740]: I1014 13:08:22.625194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-config\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.625332 master-1 kubenswrapper[4740]: E1014 13:08:22.625267 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 14 13:08:22.625332 master-1 kubenswrapper[4740]: I1014 13:08:22.625281 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-serving-ca\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.625421 master-1 kubenswrapper[4740]: E1014 13:08:22.625345 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit podName:5fd32983-7bea-471a-b6a6-36c25603a68c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:23.125317831 +0000 UTC m=+128.935607270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit") pod "apiserver-5c6d48559d-v4vd9" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c") : configmap "audit-0" not found Oct 14 13:08:22.625619 master-1 kubenswrapper[4740]: I1014 13:08:22.625577 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f24e44b3-ee1b-4452-ace9-9da83358c982-host" (OuterVolumeSpecName: "host") pod "f24e44b3-ee1b-4452-ace9-9da83358c982" (UID: "f24e44b3-ee1b-4452-ace9-9da83358c982"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:08:22.626416 master-1 kubenswrapper[4740]: I1014 13:08:22.626156 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-serving-ca\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.626416 master-1 kubenswrapper[4740]: I1014 13:08:22.626166 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-config\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.626416 master-1 kubenswrapper[4740]: I1014 13:08:22.626264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-image-import-ca\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.627283 master-1 kubenswrapper[4740]: I1014 13:08:22.626729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-trusted-ca-bundle\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.631433 master-1 kubenswrapper[4740]: I1014 13:08:22.631368 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f24e44b3-ee1b-4452-ace9-9da83358c982-kube-api-access-sgfm4" (OuterVolumeSpecName: "kube-api-access-sgfm4") pod "f24e44b3-ee1b-4452-ace9-9da83358c982" (UID: 
"f24e44b3-ee1b-4452-ace9-9da83358c982"). InnerVolumeSpecName "kube-api-access-sgfm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:08:22.632591 master-1 kubenswrapper[4740]: I1014 13:08:22.632539 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-encryption-config\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.632733 master-1 kubenswrapper[4740]: I1014 13:08:22.632685 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-serving-cert\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.635732 master-1 kubenswrapper[4740]: I1014 13:08:22.635697 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-client\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.651348 master-1 kubenswrapper[4740]: I1014 13:08:22.651300 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwbvg\" (UniqueName: \"kubernetes.io/projected/5fd32983-7bea-471a-b6a6-36c25603a68c-kube-api-access-nwbvg\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:22.727236 master-1 kubenswrapper[4740]: I1014 13:08:22.726866 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f24e44b3-ee1b-4452-ace9-9da83358c982-host\") on node \"master-1\" DevicePath 
\"\"" Oct 14 13:08:22.727236 master-1 kubenswrapper[4740]: I1014 13:08:22.727214 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgfm4\" (UniqueName: \"kubernetes.io/projected/f24e44b3-ee1b-4452-ace9-9da83358c982-kube-api-access-sgfm4\") on node \"master-1\" DevicePath \"\"" Oct 14 13:08:22.957988 master-1 kubenswrapper[4740]: I1014 13:08:22.957191 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f24e44b3-ee1b-4452-ace9-9da83358c982" path="/var/lib/kubelet/pods/f24e44b3-ee1b-4452-ace9-9da83358c982/volumes" Oct 14 13:08:23.132653 master-1 kubenswrapper[4740]: I1014 13:08:23.132568 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:23.133016 master-1 kubenswrapper[4740]: E1014 13:08:23.132974 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 14 13:08:23.133106 master-1 kubenswrapper[4740]: E1014 13:08:23.133077 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit podName:5fd32983-7bea-471a-b6a6-36c25603a68c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:24.133046697 +0000 UTC m=+129.943336066 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit") pod "apiserver-5c6d48559d-v4vd9" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c") : configmap "audit-0" not found Oct 14 13:08:23.548958 master-1 kubenswrapper[4740]: I1014 13:08:23.548886 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/cluster-olm-operator/0.log" Oct 14 13:08:23.553005 master-1 kubenswrapper[4740]: I1014 13:08:23.552962 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl" event={"ID":"f22c13e5-9b56-4f0c-a17a-677ba07226ff","Type":"ContainerStarted","Data":"df500717900afab0f253eef4dcf3130a437de0222b6c2617f9fc183ff0394fef"} Oct 14 13:08:23.555529 master-1 kubenswrapper[4740]: I1014 13:08:23.555495 4740 scope.go:117] "RemoveContainer" containerID="486d1c7793079fd8e15bb76874099efcbefc5530060963291d8dcd62d879e12b" Oct 14 13:08:23.555637 master-1 kubenswrapper[4740]: I1014 13:08:23.555609 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/master-1-debug-qq2pg" Oct 14 13:08:23.559425 master-1 kubenswrapper[4740]: I1014 13:08:23.559390 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/0.log" Oct 14 13:08:23.559491 master-1 kubenswrapper[4740]: I1014 13:08:23.559436 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-66df44bc95-gldlr" event={"ID":"97b0a691-fe82-46b1-9f04-671aed7e10be","Type":"ContainerStarted","Data":"74354a67bdda9a66aeb19c687e3657f5b807aeb4fe5f4bae4620c388d908a93f"} Oct 14 13:08:24.148148 master-1 kubenswrapper[4740]: I1014 13:08:24.148052 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:24.148460 master-1 kubenswrapper[4740]: E1014 13:08:24.148348 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 14 13:08:24.148554 master-1 kubenswrapper[4740]: E1014 13:08:24.148472 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit podName:5fd32983-7bea-471a-b6a6-36c25603a68c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:26.148444415 +0000 UTC m=+131.958733784 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit") pod "apiserver-5c6d48559d-v4vd9" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c") : configmap "audit-0" not found Oct 14 13:08:24.350695 master-1 kubenswrapper[4740]: I1014 13:08:24.350617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:24.350956 master-1 kubenswrapper[4740]: I1014 13:08:24.350785 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:24.350956 master-1 kubenswrapper[4740]: E1014 13:08:24.350811 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:24.350956 master-1 kubenswrapper[4740]: E1014 13:08:24.350949 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:32.350912214 +0000 UTC m=+138.161201583 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:08:24.356882 master-1 kubenswrapper[4740]: I1014 13:08:24.356817 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:26.177595 master-1 kubenswrapper[4740]: I1014 13:08:26.177477 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:26.178833 master-1 kubenswrapper[4740]: E1014 13:08:26.177689 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 14 13:08:26.178833 master-1 kubenswrapper[4740]: E1014 13:08:26.177788 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit podName:5fd32983-7bea-471a-b6a6-36c25603a68c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:30.177762596 +0000 UTC m=+135.988051995 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit") pod "apiserver-5c6d48559d-v4vd9" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c") : configmap "audit-0" not found Oct 14 13:08:28.512217 master-1 kubenswrapper[4740]: I1014 13:08:28.512114 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:08:28.515519 master-1 kubenswrapper[4740]: I1014 13:08:28.515455 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 14 13:08:28.525443 master-1 kubenswrapper[4740]: I1014 13:08:28.525396 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 14 13:08:28.539379 master-1 kubenswrapper[4740]: I1014 13:08:28.539019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbd6g\" (UniqueName: \"kubernetes.io/projected/a745a9ed-4507-491b-b50f-7a5e3837b928-kube-api-access-mbd6g\") pod \"network-check-target-sndvg\" (UID: \"a745a9ed-4507-491b-b50f-7a5e3837b928\") " pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:08:28.562556 master-1 kubenswrapper[4740]: I1014 13:08:28.561296 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:08:28.813977 master-1 kubenswrapper[4740]: I1014 13:08:28.813800 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-sndvg"] Oct 14 13:08:28.818284 master-1 kubenswrapper[4740]: I1014 13:08:28.818200 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:08:28.818479 master-1 kubenswrapper[4740]: E1014 13:08:28.818363 4740 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Oct 14 13:08:28.818479 master-1 kubenswrapper[4740]: I1014 13:08:28.818401 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:08:28.818479 master-1 kubenswrapper[4740]: E1014 13:08:28.818420 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:44.818404789 +0000 UTC m=+150.628694118 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : secret "serving-cert" not found Oct 14 13:08:28.818763 master-1 kubenswrapper[4740]: E1014 13:08:28.818547 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:28.818763 master-1 kubenswrapper[4740]: E1014 13:08:28.818662 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:44.818633955 +0000 UTC m=+150.628923314 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found Oct 14 13:08:28.821279 master-1 kubenswrapper[4740]: W1014 13:08:28.821178 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda745a9ed_4507_491b_b50f_7a5e3837b928.slice/crio-7979aecf9063e15e3e56b860a23477168cee0c1e552be2f7afc2926886eb2e02 WatchSource:0}: Error finding container 7979aecf9063e15e3e56b860a23477168cee0c1e552be2f7afc2926886eb2e02: Status 404 returned error can't find the container with id 7979aecf9063e15e3e56b860a23477168cee0c1e552be2f7afc2926886eb2e02 Oct 14 13:08:29.592707 master-1 kubenswrapper[4740]: I1014 13:08:29.592218 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-sndvg" 
event={"ID":"a745a9ed-4507-491b-b50f-7a5e3837b928","Type":"ContainerStarted","Data":"51c6fc6507bfd0f5e1e9c27415de1fd23511a35905e43d3a9247237c3e69f843"} Oct 14 13:08:29.592707 master-1 kubenswrapper[4740]: I1014 13:08:29.592665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-sndvg" event={"ID":"a745a9ed-4507-491b-b50f-7a5e3837b928","Type":"ContainerStarted","Data":"7979aecf9063e15e3e56b860a23477168cee0c1e552be2f7afc2926886eb2e02"} Oct 14 13:08:29.592707 master-1 kubenswrapper[4740]: I1014 13:08:29.592698 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:08:29.607633 master-1 kubenswrapper[4740]: I1014 13:08:29.607527 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-sndvg" podStartSLOduration=65.607500379 podStartE2EDuration="1m5.607500379s" podCreationTimestamp="2025-10-14 13:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:08:29.605628812 +0000 UTC m=+135.415918181" watchObservedRunningTime="2025-10-14 13:08:29.607500379 +0000 UTC m=+135.417789748" Oct 14 13:08:30.247758 master-1 kubenswrapper[4740]: I1014 13:08:30.247650 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:30.248037 master-1 kubenswrapper[4740]: E1014 13:08:30.247923 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 14 13:08:30.248104 master-1 kubenswrapper[4740]: E1014 13:08:30.248057 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit podName:5fd32983-7bea-471a-b6a6-36c25603a68c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:38.248030873 +0000 UTC m=+144.058320232 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit") pod "apiserver-5c6d48559d-v4vd9" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c") : configmap "audit-0" not found Oct 14 13:08:31.463857 master-1 kubenswrapper[4740]: I1014 13:08:31.463775 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.463893 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.463934 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.463985 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: E1014 13:08:31.464047 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464124 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: E1014 13:08:31.464148 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert podName:7be129fe-d04d-4384-a0e9-76b3148a1f3e nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.464121816 +0000 UTC m=+169.274411235 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert") pod "package-server-manager-798cc87f55-j2bjv" (UID: "7be129fe-d04d-4384-a0e9-76b3148a1f3e") : secret "package-server-manager-serving-cert" not found
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464285 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464328 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464401 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464479 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464513 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: E1014 13:08:31.464512 4740 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: E1014 13:08:31.464595 4740 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Oct 14 13:08:31.465643 master-1 kubenswrapper[4740]: I1014 13:08:31.464536 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"
Oct 14 13:08:31.466740 master-1 kubenswrapper[4740]: E1014 13:08:31.464629 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls podName:c4ca808a-394d-4a17-ac12-1df264c7ed92 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.464597548 +0000 UTC m=+169.274886907 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls") pod "machine-config-operator-7b75469658-j2dbc" (UID: "c4ca808a-394d-4a17-ac12-1df264c7ed92") : secret "mco-proxy-tls" not found
Oct 14 13:08:31.466740 master-1 kubenswrapper[4740]: E1014 13:08:31.464679 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls podName:62ef5e24-de36-454a-a34c-e741a86a6f96 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.46464691 +0000 UTC m=+169.274936279 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-5b5dd85dcc-cxtgh" (UID: "62ef5e24-de36-454a-a34c-e741a86a6f96") : secret "cluster-monitoring-operator-tls" not found
Oct 14 13:08:31.466740 master-1 kubenswrapper[4740]: I1014 13:08:31.464739 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:08:31.471846 master-1 kubenswrapper[4740]: I1014 13:08:31.471743 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a4ab71e1-9b1f-42ee-8abb-8f998e3cae74-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-84f9cbd5d9-n87md\" (UID: \"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74\") " pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:08:31.472341 master-1 kubenswrapper[4740]: I1014 13:08:31.472285 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b51ef0bc-8b0e-4fab-b101-660ed408924c-machine-api-operator-tls\") pod \"machine-api-operator-9dbb96f7-s66vj\" (UID: \"b51ef0bc-8b0e-4fab-b101-660ed408924c\") " pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:08:31.472556 master-1 kubenswrapper[4740]: I1014 13:08:31.472508 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:31.472556 master-1 kubenswrapper[4740]: I1014 13:08:31.472546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa31cdd-e051-4987-a1a2-814fc7445e6b-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-5cf49b6487-4cf2d\" (UID: \"1fa31cdd-e051-4987-a1a2-814fc7445e6b\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:08:31.472838 master-1 kubenswrapper[4740]: I1014 13:08:31.472794 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cert\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:31.472974 master-1 kubenswrapper[4740]: I1014 13:08:31.472806 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6c8fbf4498-kcckh\" (UID: \"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1\") " pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:31.473662 master-1 kubenswrapper[4740]: I1014 13:08:31.473613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab511c1d-28e3-448a-86ec-cea21871fd26-cert\") pod \"cluster-autoscaler-operator-7ff449c7c5-nmpfk\" (UID: \"ab511c1d-28e3-448a-86ec-cea21871fd26\") " pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:08:31.473852 master-1 kubenswrapper[4740]: I1014 13:08:31.473706 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1a35e1e-333f-480c-b1d6-059475700627-image-registry-operator-tls\") pod \"cluster-image-registry-operator-6b8674d7ff-gspqw\" (UID: \"b1a35e1e-333f-480c-b1d6-059475700627\") " pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:08:31.473931 master-1 kubenswrapper[4740]: I1014 13:08:31.473899 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/655ad1ce-582a-4728-8bfd-ca4164468de3-apiservice-cert\") pod \"cluster-node-tuning-operator-7866c9bdf4-d4dlj\" (UID: \"655ad1ce-582a-4728-8bfd-ca4164468de3\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:31.475045 master-1 kubenswrapper[4740]: I1014 13:08:31.474999 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/398ba6fd-0f8f-46af-b690-61a6eec9176b-metrics-tls\") pod \"ingress-operator-766ddf4575-xhdjt\" (UID: \"398ba6fd-0f8f-46af-b690-61a6eec9176b\") " pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:08:31.487709 master-1 kubenswrapper[4740]: I1014 13:08:31.487658 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"
Oct 14 13:08:31.498588 master-1 kubenswrapper[4740]: I1014 13:08:31.498535 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"
Oct 14 13:08:31.508011 master-1 kubenswrapper[4740]: I1014 13:08:31.507955 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"
Oct 14 13:08:31.518550 master-1 kubenswrapper[4740]: I1014 13:08:31.518466 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: I1014 13:08:31.565466 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: I1014 13:08:31.565585 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: I1014 13:08:31.565635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: I1014 13:08:31.565699 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.565704 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.565804 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs podName:01742ba1-f43b-4ff2-97d5-1a535e925a0f nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.565775074 +0000 UTC m=+169.376064443 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs") pod "multus-admission-controller-77b66fddc8-9npgz" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f") : secret "multus-admission-controller-secret" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.565849 4740 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.565895 4740 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.565956 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs podName:ec085d84-4833-4e0b-9e6a-35b983a7059b nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.565927588 +0000 UTC m=+169.376216957 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs") pod "multus-admission-controller-77b66fddc8-mgc7h" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b") : secret "multus-admission-controller-secret" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.565986 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics podName:2a106ff8-388a-4d30-8370-aad661eb4365 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.565972899 +0000 UTC m=+169.376262258 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics") pod "marketplace-operator-c4f798dd4-djh96" (UID: "2a106ff8-388a-4d30-8370-aad661eb4365") : secret "marketplace-operator-metrics" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.566006 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.566061 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert podName:3d292fbb-b49c-4543-993b-738103c7419b nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.566035871 +0000 UTC m=+169.376325240 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert") pod "catalog-operator-f966fb6f8-dwwm2" (UID: "3d292fbb-b49c-4543-993b-738103c7419b") : secret "catalog-operator-serving-cert" not found
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: I1014 13:08:31.566092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: I1014 13:08:31.566004 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: I1014 13:08:31.566201 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:08:31.573131 master-1 kubenswrapper[4740]: E1014 13:08:31.566286 4740 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Oct 14 13:08:31.574917 master-1 kubenswrapper[4740]: E1014 13:08:31.566335 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert podName:57526e49-7f51-4a66-8f48-0c485fc1e88f nodeName:}" failed. No retries permitted until 2025-10-14 13:09:03.56632034 +0000 UTC m=+169.376609699 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert") pod "olm-operator-867f8475d9-fl56c" (UID: "57526e49-7f51-4a66-8f48-0c485fc1e88f") : secret "olm-operator-serving-cert" not found
Oct 14 13:08:31.574917 master-1 kubenswrapper[4740]: I1014 13:08:31.566290 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:08:31.574917 master-1 kubenswrapper[4740]: I1014 13:08:31.570686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910af03d-893a-443d-b6ed-fe21c26951f4-metrics-tls\") pod \"dns-operator-7769d9677-nh2qc\" (UID: \"910af03d-893a-443d-b6ed-fe21c26951f4\") " pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:08:31.574917 master-1 kubenswrapper[4740]: I1014 13:08:31.572421 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d68f537-be68-4623-bded-e5d7fb5c3573-machine-approver-tls\") pod \"machine-approver-7876f99457-kpq7g\" (UID: \"1d68f537-be68-4623-bded-e5d7fb5c3573\") " pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:08:31.601078 master-1 kubenswrapper[4740]: I1014 13:08:31.595594 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g"
Oct 14 13:08:31.650786 master-1 kubenswrapper[4740]: W1014 13:08:31.650511 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d68f537_be68_4623_bded_e5d7fb5c3573.slice/crio-966619943ff6a7e8c6211f7b4468d3fdd80cea3aeee8d74f91fc653d8bc53571 WatchSource:0}: Error finding container 966619943ff6a7e8c6211f7b4468d3fdd80cea3aeee8d74f91fc653d8bc53571: Status 404 returned error can't find the container with id 966619943ff6a7e8c6211f7b4468d3fdd80cea3aeee8d74f91fc653d8bc53571
Oct 14 13:08:31.673165 master-1 kubenswrapper[4740]: I1014 13:08:31.672894 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"
Oct 14 13:08:31.710350 master-1 kubenswrapper[4740]: I1014 13:08:31.707891 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"
Oct 14 13:08:31.739975 master-1 kubenswrapper[4740]: I1014 13:08:31.739570 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc"
Oct 14 13:08:31.768972 master-1 kubenswrapper[4740]: I1014 13:08:31.768451 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"
Oct 14 13:08:31.813411 master-1 kubenswrapper[4740]: I1014 13:08:31.812917 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md"]
Oct 14 13:08:31.852485 master-1 kubenswrapper[4740]: I1014 13:08:31.852447 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk"]
Oct 14 13:08:31.889256 master-1 kubenswrapper[4740]: I1014 13:08:31.889190 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh"]
Oct 14 13:08:31.926898 master-1 kubenswrapper[4740]: I1014 13:08:31.926725 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-7769d9677-nh2qc"]
Oct 14 13:08:31.947905 master-1 kubenswrapper[4740]: I1014 13:08:31.947866 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-9dbb96f7-s66vj"]
Oct 14 13:08:31.953398 master-1 kubenswrapper[4740]: W1014 13:08:31.953356 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb51ef0bc_8b0e_4fab_b101_660ed408924c.slice/crio-ea2aebf942ecc4a7341b4c166457d0acdd3e44d9def345a72b1f00e28a603b06 WatchSource:0}: Error finding container ea2aebf942ecc4a7341b4c166457d0acdd3e44d9def345a72b1f00e28a603b06: Status 404 returned error can't find the container with id ea2aebf942ecc4a7341b4c166457d0acdd3e44d9def345a72b1f00e28a603b06
Oct 14 13:08:31.960171 master-1 kubenswrapper[4740]: I1014 13:08:31.960109 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj"]
Oct 14 13:08:31.962145 master-1 kubenswrapper[4740]: I1014 13:08:31.962094 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw"]
Oct 14 13:08:31.966050 master-1 kubenswrapper[4740]: W1014 13:08:31.966000 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod655ad1ce_582a_4728_8bfd_ca4164468de3.slice/crio-5597e065226e5066dcc764b374716bdfffb5df1a5bfe102b093758f6797736e1 WatchSource:0}: Error finding container 5597e065226e5066dcc764b374716bdfffb5df1a5bfe102b093758f6797736e1: Status 404 returned error can't find the container with id 5597e065226e5066dcc764b374716bdfffb5df1a5bfe102b093758f6797736e1
Oct 14 13:08:31.974081 master-1 kubenswrapper[4740]: W1014 13:08:31.974048 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a35e1e_333f_480c_b1d6_059475700627.slice/crio-7c0d78788165fb5c8711c217d573300dcc918ab4affb92bb664959c80b9c4be8 WatchSource:0}: Error finding container 7c0d78788165fb5c8711c217d573300dcc918ab4affb92bb664959c80b9c4be8: Status 404 returned error can't find the container with id 7c0d78788165fb5c8711c217d573300dcc918ab4affb92bb664959c80b9c4be8
Oct 14 13:08:32.007328 master-1 kubenswrapper[4740]: I1014 13:08:32.003360 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt"]
Oct 14 13:08:32.007886 master-1 kubenswrapper[4740]: I1014 13:08:32.007192 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d"]
Oct 14 13:08:32.011078 master-1 kubenswrapper[4740]: W1014 13:08:32.011025 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod398ba6fd_0f8f_46af_b690_61a6eec9176b.slice/crio-5516535de04611a574fb75ba9746428dc7e25fdaf702eedffef47219d54bf760 WatchSource:0}: Error finding container 5516535de04611a574fb75ba9746428dc7e25fdaf702eedffef47219d54bf760: Status 404 returned error can't find the container with id 5516535de04611a574fb75ba9746428dc7e25fdaf702eedffef47219d54bf760
Oct 14 13:08:32.012991 master-1 kubenswrapper[4740]: W1014 13:08:32.012933 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fa31cdd_e051_4987_a1a2_814fc7445e6b.slice/crio-9d4b6dae08a13ebb490e65ebffa044aaf86e80802fcd6c7bd7da1cfd09e87860 WatchSource:0}: Error finding container 9d4b6dae08a13ebb490e65ebffa044aaf86e80802fcd6c7bd7da1cfd09e87860: Status 404 returned error can't find the container with id 9d4b6dae08a13ebb490e65ebffa044aaf86e80802fcd6c7bd7da1cfd09e87860
Oct 14 13:08:32.376375 master-1 kubenswrapper[4740]: I1014 13:08:32.375751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm"
Oct 14 13:08:32.376375 master-1 kubenswrapper[4740]: E1014 13:08:32.376071 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Oct 14 13:08:32.376375 master-1 kubenswrapper[4740]: E1014 13:08:32.376141 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:48.376119217 +0000 UTC m=+154.186408576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found
Oct 14 13:08:32.622147 master-1 kubenswrapper[4740]: I1014 13:08:32.622053 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" event={"ID":"1d68f537-be68-4623-bded-e5d7fb5c3573","Type":"ContainerStarted","Data":"729a0508038e4b9d1d2019467b7c8e6f8d9a11005fddbccbce1e1f948069c6d4"}
Oct 14 13:08:32.622147 master-1 kubenswrapper[4740]: I1014 13:08:32.622113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" event={"ID":"1d68f537-be68-4623-bded-e5d7fb5c3573","Type":"ContainerStarted","Data":"966619943ff6a7e8c6211f7b4468d3fdd80cea3aeee8d74f91fc653d8bc53571"}
Oct 14 13:08:32.625799 master-1 kubenswrapper[4740]: I1014 13:08:32.625729 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" event={"ID":"1fa31cdd-e051-4987-a1a2-814fc7445e6b","Type":"ContainerStarted","Data":"ac7d3197f82284e8819e71ad146514fd6f4283a94a8fde346bd750a03a3a5bf5"}
Oct 14 13:08:32.625799 master-1 kubenswrapper[4740]: I1014 13:08:32.625779 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" event={"ID":"1fa31cdd-e051-4987-a1a2-814fc7445e6b","Type":"ContainerStarted","Data":"9d4b6dae08a13ebb490e65ebffa044aaf86e80802fcd6c7bd7da1cfd09e87860"}
Oct 14 13:08:32.627665 master-1 kubenswrapper[4740]: I1014 13:08:32.627531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" event={"ID":"655ad1ce-582a-4728-8bfd-ca4164468de3","Type":"ContainerStarted","Data":"5597e065226e5066dcc764b374716bdfffb5df1a5bfe102b093758f6797736e1"}
Oct 14 13:08:32.629224 master-1 kubenswrapper[4740]: I1014 13:08:32.629124 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" event={"ID":"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74","Type":"ContainerStarted","Data":"bfced5d67432b52bc75c1f4856b8dc7ef5d81608acbfa508945798c5cf2f2faa"}
Oct 14 13:08:32.631949 master-1 kubenswrapper[4740]: I1014 13:08:32.631842 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerStarted","Data":"5516535de04611a574fb75ba9746428dc7e25fdaf702eedffef47219d54bf760"}
Oct 14 13:08:32.639304 master-1 kubenswrapper[4740]: I1014 13:08:32.634329 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" event={"ID":"910af03d-893a-443d-b6ed-fe21c26951f4","Type":"ContainerStarted","Data":"14930a3603a44c84a550ee68aea5e80e9c0a8c6b2b3a7499ef1483a3f94a1839"}
Oct 14 13:08:32.639304 master-1 kubenswrapper[4740]: I1014 13:08:32.638759 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" event={"ID":"b51ef0bc-8b0e-4fab-b101-660ed408924c","Type":"ContainerStarted","Data":"9598a8a6ed6e2c3b10699c32cc6b219fa40444a98507c7cb970d5b15158c5609"}
Oct 14 13:08:32.639304 master-1 kubenswrapper[4740]: I1014 13:08:32.638810 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" event={"ID":"b51ef0bc-8b0e-4fab-b101-660ed408924c","Type":"ContainerStarted","Data":"ea2aebf942ecc4a7341b4c166457d0acdd3e44d9def345a72b1f00e28a603b06"}
Oct 14 13:08:32.640420 master-1 kubenswrapper[4740]: I1014 13:08:32.640380 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" event={"ID":"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1","Type":"ContainerStarted","Data":"ed5e7ae0c093f20b99c18d14512b58ada34027971211de72bbfb2f0e1970edad"}
Oct 14 13:08:32.642597 master-1 kubenswrapper[4740]: I1014 13:08:32.642531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" event={"ID":"ab511c1d-28e3-448a-86ec-cea21871fd26","Type":"ContainerStarted","Data":"62d269e15e0daebf167d3fdc40a7533642f2a0a53ee33c1035fa81f083634d44"}
Oct 14 13:08:32.642709 master-1 kubenswrapper[4740]: I1014 13:08:32.642600 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" event={"ID":"ab511c1d-28e3-448a-86ec-cea21871fd26","Type":"ContainerStarted","Data":"fb05e70d5a2dbd00c02a7cf2fcbb11575a5df8a533a72bd10e888a903f9d6fc1"}
Oct 14 13:08:32.643750 master-1 kubenswrapper[4740]: I1014 13:08:32.643694 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" event={"ID":"b1a35e1e-333f-480c-b1d6-059475700627","Type":"ContainerStarted","Data":"7c0d78788165fb5c8711c217d573300dcc918ab4affb92bb664959c80b9c4be8"}
Oct 14 13:08:33.498710 master-1 kubenswrapper[4740]: I1014 13:08:33.498646 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"]
Oct 14 13:08:33.499504 master-1 kubenswrapper[4740]: I1014 13:08:33.499469 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.502349 master-1 kubenswrapper[4740]: I1014 13:08:33.502314 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Oct 14 13:08:33.502961 master-1 kubenswrapper[4740]: I1014 13:08:33.502929 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Oct 14 13:08:33.503034 master-1 kubenswrapper[4740]: I1014 13:08:33.502990 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Oct 14 13:08:33.507979 master-1 kubenswrapper[4740]: I1014 13:08:33.507936 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"]
Oct 14 13:08:33.516204 master-1 kubenswrapper[4740]: I1014 13:08:33.516154 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Oct 14 13:08:33.592182 master-1 kubenswrapper[4740]: I1014 13:08:33.592090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.592633 master-1 kubenswrapper[4740]: I1014 13:08:33.592191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-cache\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.592715 master-1 kubenswrapper[4740]: I1014 13:08:33.592665 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-ca-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.592756 master-1 kubenswrapper[4740]: I1014 13:08:33.592726 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx2ht\" (UniqueName: \"kubernetes.io/projected/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-kube-api-access-fx2ht\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.592941 master-1 kubenswrapper[4740]: I1014 13:08:33.592919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-containers\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.592984 master-1 kubenswrapper[4740]: I1014 13:08:33.592973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.601268 master-1 kubenswrapper[4740]: I1014 13:08:33.601210 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"]
Oct 14 13:08:33.602025 master-1 kubenswrapper[4740]: I1014 13:08:33.602000 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:08:33.605670 master-1 kubenswrapper[4740]: I1014 13:08:33.605625 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Oct 14 13:08:33.606129 master-1 kubenswrapper[4740]: I1014 13:08:33.606095 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Oct 14 13:08:33.608404 master-1 kubenswrapper[4740]: I1014 13:08:33.608358 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"]
Oct 14 13:08:33.612045 master-1 kubenswrapper[4740]: I1014 13:08:33.612010 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Oct 14 13:08:33.693790 master-1 kubenswrapper[4740]: I1014 13:08:33.693740 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-containers\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.693833 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.693860 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-cache\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.693900 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-ca-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.693917 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx2ht\" (UniqueName: \"kubernetes.io/projected/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-kube-api-access-fx2ht\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: E1014 13:08:33.693950 4740 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.693963 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod
\"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: E1014 13:08:33.694013 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:34.193995455 +0000 UTC m=+140.004284774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : secret "catalogserver-cert" not found Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.694095 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/180ced15-1fb1-464d-85f2-0bcc0d836dab-cache\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.694182 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/180ced15-1fb1-464d-85f2-0bcc0d836dab-ca-certs\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.694206 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-containers\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.694276 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.694295 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqww\" (UniqueName: \"kubernetes.io/projected/180ced15-1fb1-464d-85f2-0bcc0d836dab-kube-api-access-kmqww\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.694383 master-1 kubenswrapper[4740]: I1014 13:08:33.694344 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-containers\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:33.694738 master-1 kubenswrapper[4740]: E1014 13:08:33.694397 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:34.194387996 +0000 UTC m=+140.004677325 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:33.694738 master-1 kubenswrapper[4740]: I1014 13:08:33.694504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-cache\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:33.702788 master-1 kubenswrapper[4740]: I1014 13:08:33.702754 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-ca-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:33.728670 master-1 kubenswrapper[4740]: I1014 13:08:33.728641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx2ht\" (UniqueName: \"kubernetes.io/projected/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-kube-api-access-fx2ht\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:33.795384 master-1 kubenswrapper[4740]: I1014 13:08:33.794890 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqww\" (UniqueName: \"kubernetes.io/projected/180ced15-1fb1-464d-85f2-0bcc0d836dab-kube-api-access-kmqww\") pod 
\"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.795384 master-1 kubenswrapper[4740]: I1014 13:08:33.794956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-containers\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.795384 master-1 kubenswrapper[4740]: I1014 13:08:33.795064 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.795384 master-1 kubenswrapper[4740]: I1014 13:08:33.795083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/180ced15-1fb1-464d-85f2-0bcc0d836dab-cache\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.795384 master-1 kubenswrapper[4740]: I1014 13:08:33.795110 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/180ced15-1fb1-464d-85f2-0bcc0d836dab-ca-certs\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.795384 master-1 kubenswrapper[4740]: I1014 13:08:33.795152 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-containers\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.795384 master-1 kubenswrapper[4740]: E1014 13:08:33.795219 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:08:34.295192282 +0000 UTC m=+140.105481611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:33.795690 master-1 kubenswrapper[4740]: I1014 13:08:33.795589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/180ced15-1fb1-464d-85f2-0bcc0d836dab-cache\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.802764 master-1 kubenswrapper[4740]: I1014 13:08:33.802642 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/180ced15-1fb1-464d-85f2-0bcc0d836dab-ca-certs\") pod 
\"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:33.814423 master-1 kubenswrapper[4740]: I1014 13:08:33.814388 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqww\" (UniqueName: \"kubernetes.io/projected/180ced15-1fb1-464d-85f2-0bcc0d836dab-kube-api-access-kmqww\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:34.200309 master-1 kubenswrapper[4740]: I1014 13:08:34.200201 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:34.200452 master-1 kubenswrapper[4740]: I1014 13:08:34.200348 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:34.200538 master-1 kubenswrapper[4740]: E1014 13:08:34.200501 4740 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Oct 14 13:08:34.200626 master-1 kubenswrapper[4740]: E1014 13:08:34.200521 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker 
podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:35.200500752 +0000 UTC m=+141.010790081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:34.200679 master-1 kubenswrapper[4740]: E1014 13:08:34.200637 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:35.200614775 +0000 UTC m=+141.010904164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : secret "catalogserver-cert" not found Oct 14 13:08:34.301814 master-1 kubenswrapper[4740]: I1014 13:08:34.301760 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:34.302045 master-1 kubenswrapper[4740]: E1014 13:08:34.301967 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. 
No retries permitted until 2025-10-14 13:08:35.301938444 +0000 UTC m=+141.112227793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:35.235882 master-1 kubenswrapper[4740]: I1014 13:08:35.235578 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:35.236635 master-1 kubenswrapper[4740]: I1014 13:08:35.235928 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:35.236635 master-1 kubenswrapper[4740]: E1014 13:08:35.236110 4740 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Oct 14 13:08:35.236635 master-1 kubenswrapper[4740]: E1014 13:08:35.236169 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:37.236150621 +0000 UTC m=+143.046439960 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : secret "catalogserver-cert" not found Oct 14 13:08:35.236635 master-1 kubenswrapper[4740]: E1014 13:08:35.236188 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:37.236179472 +0000 UTC m=+143.046468811 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:35.337405 master-1 kubenswrapper[4740]: I1014 13:08:35.337309 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:35.337653 master-1 kubenswrapper[4740]: E1014 13:08:35.337589 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:08:37.337553863 +0000 UTC m=+143.147843232 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:37.262762 master-1 kubenswrapper[4740]: I1014 13:08:37.262686 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:37.263700 master-1 kubenswrapper[4740]: I1014 13:08:37.262861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:37.263700 master-1 kubenswrapper[4740]: E1014 13:08:37.262927 4740 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Oct 14 13:08:37.263700 master-1 kubenswrapper[4740]: E1014 13:08:37.263013 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:41.26299046 +0000 UTC m=+147.073279809 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : secret "catalogserver-cert" not found Oct 14 13:08:37.263700 master-1 kubenswrapper[4740]: E1014 13:08:37.263129 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:41.263075182 +0000 UTC m=+147.073364521 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:37.364298 master-1 kubenswrapper[4740]: I1014 13:08:37.364166 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:37.364575 master-1 kubenswrapper[4740]: E1014 13:08:37.364362 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:08:41.364335741 +0000 UTC m=+147.174625150 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:38.282133 master-1 kubenswrapper[4740]: I1014 13:08:38.282062 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") pod \"apiserver-5c6d48559d-v4vd9\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:38.283011 master-1 kubenswrapper[4740]: E1014 13:08:38.282544 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Oct 14 13:08:38.283011 master-1 kubenswrapper[4740]: E1014 13:08:38.282771 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit podName:5fd32983-7bea-471a-b6a6-36c25603a68c nodeName:}" failed. No retries permitted until 2025-10-14 13:08:54.28273297 +0000 UTC m=+160.093022309 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit") pod "apiserver-5c6d48559d-v4vd9" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c") : configmap "audit-0" not found Oct 14 13:08:38.293332 master-1 kubenswrapper[4740]: I1014 13:08:38.291404 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5c6d48559d-v4vd9"] Oct 14 13:08:38.293332 master-1 kubenswrapper[4740]: E1014 13:08:38.291703 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" podUID="5fd32983-7bea-471a-b6a6-36c25603a68c" Oct 14 13:08:38.671458 master-1 kubenswrapper[4740]: I1014 13:08:38.670646 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:38.682819 master-1 kubenswrapper[4740]: I1014 13:08:38.682741 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9" Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.789837 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-image-import-ca\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.789902 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-serving-ca\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.789939 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-client\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.789966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-encryption-config\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.790007 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwbvg\" (UniqueName: \"kubernetes.io/projected/5fd32983-7bea-471a-b6a6-36c25603a68c-kube-api-access-nwbvg\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") " Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.790037 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-config\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") "
Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.790127 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-serving-cert\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") "
Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.790153 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-node-pullsecrets\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") "
Oct 14 13:08:38.790005 master-1 kubenswrapper[4740]: I1014 13:08:38.790177 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-trusted-ca-bundle\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") "
Oct 14 13:08:38.791401 master-1 kubenswrapper[4740]: I1014 13:08:38.790200 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-audit-dir\") pod \"5fd32983-7bea-471a-b6a6-36c25603a68c\" (UID: \"5fd32983-7bea-471a-b6a6-36c25603a68c\") "
Oct 14 13:08:38.791401 master-1 kubenswrapper[4740]: I1014 13:08:38.790735 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:08:38.791401 master-1 kubenswrapper[4740]: I1014 13:08:38.791334 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:08:38.791840 master-1 kubenswrapper[4740]: I1014 13:08:38.791787 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:08:38.793422 master-1 kubenswrapper[4740]: I1014 13:08:38.792504 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:08:38.793422 master-1 kubenswrapper[4740]: I1014 13:08:38.793377 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-config" (OuterVolumeSpecName: "config") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:08:38.793745 master-1 kubenswrapper[4740]: I1014 13:08:38.793650 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:08:38.798108 master-1 kubenswrapper[4740]: I1014 13:08:38.798011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd32983-7bea-471a-b6a6-36c25603a68c-kube-api-access-nwbvg" (OuterVolumeSpecName: "kube-api-access-nwbvg") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "kube-api-access-nwbvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:08:38.798588 master-1 kubenswrapper[4740]: I1014 13:08:38.798478 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:08:38.802695 master-1 kubenswrapper[4740]: I1014 13:08:38.802619 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:08:38.815325 master-1 kubenswrapper[4740]: I1014 13:08:38.815194 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fd32983-7bea-471a-b6a6-36c25603a68c" (UID: "5fd32983-7bea-471a-b6a6-36c25603a68c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892089 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-serving-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892128 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892144 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892159 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwbvg\" (UniqueName: \"kubernetes.io/projected/5fd32983-7bea-471a-b6a6-36c25603a68c-kube-api-access-nwbvg\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892174 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892187 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd32983-7bea-471a-b6a6-36c25603a68c-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892199 4740 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-node-pullsecrets\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892182 master-1 kubenswrapper[4740]: I1014 13:08:38.892211 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892769 master-1 kubenswrapper[4740]: I1014 13:08:38.892238 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5fd32983-7bea-471a-b6a6-36c25603a68c-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:38.892769 master-1 kubenswrapper[4740]: I1014 13:08:38.892253 4740 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-image-import-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:39.674216 master-1 kubenswrapper[4740]: I1014 13:08:39.674154 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5c6d48559d-v4vd9"
Oct 14 13:08:39.729371 master-1 kubenswrapper[4740]: I1014 13:08:39.729277 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6576f6bc9d-xfzjr"]
Oct 14 13:08:39.736319 master-1 kubenswrapper[4740]: I1014 13:08:39.732191 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.736319 master-1 kubenswrapper[4740]: I1014 13:08:39.735150 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5c6d48559d-v4vd9"]
Oct 14 13:08:39.737824 master-1 kubenswrapper[4740]: I1014 13:08:39.737796 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-5c6d48559d-v4vd9"]
Oct 14 13:08:39.761161 master-1 kubenswrapper[4740]: I1014 13:08:39.761124 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 14 13:08:39.761576 master-1 kubenswrapper[4740]: I1014 13:08:39.761547 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 14 13:08:39.761737 master-1 kubenswrapper[4740]: I1014 13:08:39.761695 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 14 13:08:39.762014 master-1 kubenswrapper[4740]: I1014 13:08:39.761708 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 14 13:08:39.762062 master-1 kubenswrapper[4740]: I1014 13:08:39.761746 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 14 13:08:39.762164 master-1 kubenswrapper[4740]: I1014 13:08:39.762138 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 14 13:08:39.762308 master-1 kubenswrapper[4740]: I1014 13:08:39.762284 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 14 13:08:39.762415 master-1 kubenswrapper[4740]: I1014 13:08:39.762400 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 14 13:08:39.763672 master-1 kubenswrapper[4740]: I1014 13:08:39.763642 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 14 13:08:39.764391 master-1 kubenswrapper[4740]: I1014 13:08:39.764351 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6576f6bc9d-xfzjr"]
Oct 14 13:08:39.767872 master-1 kubenswrapper[4740]: I1014 13:08:39.767838 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 14 13:08:39.803433 master-1 kubenswrapper[4740]: I1014 13:08:39.803290 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9svb\" (UniqueName: \"kubernetes.io/projected/ed68870d-0f75-4bac-8f5e-36016becfd08-kube-api-access-l9svb\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803433 master-1 kubenswrapper[4740]: I1014 13:08:39.803358 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-audit\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803433 master-1 kubenswrapper[4740]: I1014 13:08:39.803400 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-serving-cert\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803433 master-1 kubenswrapper[4740]: I1014 13:08:39.803415 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-audit-dir\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803433 master-1 kubenswrapper[4740]: I1014 13:08:39.803461 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-node-pullsecrets\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803899 master-1 kubenswrapper[4740]: I1014 13:08:39.803477 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-client\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803899 master-1 kubenswrapper[4740]: I1014 13:08:39.803492 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-config\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803899 master-1 kubenswrapper[4740]: I1014 13:08:39.803509 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-serving-ca\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803899 master-1 kubenswrapper[4740]: I1014 13:08:39.803524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-image-import-ca\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803899 master-1 kubenswrapper[4740]: I1014 13:08:39.803540 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-encryption-config\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803899 master-1 kubenswrapper[4740]: I1014 13:08:39.803563 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-trusted-ca-bundle\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.803899 master-1 kubenswrapper[4740]: I1014 13:08:39.803620 4740 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5fd32983-7bea-471a-b6a6-36c25603a68c-audit\") on node \"master-1\" DevicePath \"\""
Oct 14 13:08:39.904266 master-1 kubenswrapper[4740]: I1014 13:08:39.904183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-node-pullsecrets\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.904266 master-1 kubenswrapper[4740]: I1014 13:08:39.904280 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-client\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905356 master-1 kubenswrapper[4740]: I1014 13:08:39.904310 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-config\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905356 master-1 kubenswrapper[4740]: I1014 13:08:39.904331 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-serving-ca\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905356 master-1 kubenswrapper[4740]: I1014 13:08:39.904346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-image-import-ca\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905356 master-1 kubenswrapper[4740]: I1014 13:08:39.904348 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-node-pullsecrets\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905356 master-1 kubenswrapper[4740]: I1014 13:08:39.905344 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-serving-ca\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905655 master-1 kubenswrapper[4740]: I1014 13:08:39.904365 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-encryption-config\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905655 master-1 kubenswrapper[4740]: I1014 13:08:39.905506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-trusted-ca-bundle\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905836 master-1 kubenswrapper[4740]: I1014 13:08:39.905760 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9svb\" (UniqueName: \"kubernetes.io/projected/ed68870d-0f75-4bac-8f5e-36016becfd08-kube-api-access-l9svb\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.905891 master-1 kubenswrapper[4740]: I1014 13:08:39.905842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-audit\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.906003 master-1 kubenswrapper[4740]: I1014 13:08:39.905968 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-serving-cert\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.906056 master-1 kubenswrapper[4740]: I1014 13:08:39.906005 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-audit-dir\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.906190 master-1 kubenswrapper[4740]: I1014 13:08:39.906155 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-audit-dir\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.906328 master-1 kubenswrapper[4740]: I1014 13:08:39.905781 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-config\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.906962 master-1 kubenswrapper[4740]: I1014 13:08:39.906908 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-image-import-ca\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.907210 master-1 kubenswrapper[4740]: I1014 13:08:39.907164 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-audit\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.907357 master-1 kubenswrapper[4740]: I1014 13:08:39.907251 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-trusted-ca-bundle\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.909842 master-1 kubenswrapper[4740]: I1014 13:08:39.909779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-encryption-config\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.910745 master-1 kubenswrapper[4740]: I1014 13:08:39.910694 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-client\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.911735 master-1 kubenswrapper[4740]: I1014 13:08:39.911699 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-serving-cert\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:39.924559 master-1 kubenswrapper[4740]: I1014 13:08:39.924389 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9svb\" (UniqueName: \"kubernetes.io/projected/ed68870d-0f75-4bac-8f5e-36016becfd08-kube-api-access-l9svb\") pod \"apiserver-6576f6bc9d-xfzjr\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") " pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:40.076472 master-1 kubenswrapper[4740]: I1014 13:08:40.076396 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:08:40.952456 master-1 kubenswrapper[4740]: I1014 13:08:40.952357 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd32983-7bea-471a-b6a6-36c25603a68c" path="/var/lib/kubelet/pods/5fd32983-7bea-471a-b6a6-36c25603a68c/volumes"
Oct 14 13:08:41.323939 master-1 kubenswrapper[4740]: I1014 13:08:41.323855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:41.324210 master-1 kubenswrapper[4740]: I1014 13:08:41.324063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:41.324210 master-1 kubenswrapper[4740]: E1014 13:08:41.324113 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:08:49.324075893 +0000 UTC m=+155.134365262 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:08:41.329949 master-1 kubenswrapper[4740]: I1014 13:08:41.329872 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-catalogserver-certs\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:08:41.426261 master-1 kubenswrapper[4740]: I1014 13:08:41.426120 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:08:41.426556 master-1 kubenswrapper[4740]: E1014 13:08:41.426354 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:08:49.426320338 +0000 UTC m=+155.236609697 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:08:42.830083 master-1 kubenswrapper[4740]: I1014 13:08:42.829899 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"]
Oct 14 13:08:42.830919 master-1 kubenswrapper[4740]: I1014 13:08:42.830873 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:42.834774 master-1 kubenswrapper[4740]: I1014 13:08:42.834723 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"]
Oct 14 13:08:42.838756 master-1 kubenswrapper[4740]: I1014 13:08:42.838730 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Oct 14 13:08:42.944174 master-1 kubenswrapper[4740]: I1014 13:08:42.944132 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-var-lock\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:42.944370 master-1 kubenswrapper[4740]: I1014 13:08:42.944314 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:42.944403 master-1 kubenswrapper[4740]: I1014 13:08:42.944368 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:43.045273 master-1 kubenswrapper[4740]: I1014 13:08:43.044996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:43.045273 master-1 kubenswrapper[4740]: I1014 13:08:43.045054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:43.045273 master-1 kubenswrapper[4740]: I1014 13:08:43.045092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-var-lock\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:43.045535 master-1 kubenswrapper[4740]: I1014 13:08:43.045395 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:43.045535 master-1 kubenswrapper[4740]: I1014 13:08:43.045408 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-var-lock\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:43.074353 master-1 kubenswrapper[4740]: I1014 13:08:43.074297 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kube-api-access\") pod \"installer-1-master-1\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:43.151202 master-1 kubenswrapper[4740]: I1014 13:08:43.151045 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1"
Oct 14 13:08:44.184806 master-1 kubenswrapper[4740]: I1014 13:08:44.184641 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-1"]
Oct 14 13:08:44.185679 master-1 kubenswrapper[4740]: I1014 13:08:44.185628 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.189825 master-1 kubenswrapper[4740]: I1014 13:08:44.189717 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Oct 14 13:08:44.201558 master-1 kubenswrapper[4740]: I1014 13:08:44.201183 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-1"]
Oct 14 13:08:44.259259 master-1 kubenswrapper[4740]: I1014 13:08:44.259177 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.259467 master-1 kubenswrapper[4740]: I1014 13:08:44.259346 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-var-lock\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.259467 master-1 kubenswrapper[4740]: I1014 13:08:44.259390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b61b7a8e-e2be-4f11-a659-1919213dda51-kube-api-access\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.360975 master-1 kubenswrapper[4740]: I1014 13:08:44.360858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b61b7a8e-e2be-4f11-a659-1919213dda51-kube-api-access\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.361338 master-1 kubenswrapper[4740]: I1014 13:08:44.361168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.361338 master-1 kubenswrapper[4740]: I1014 13:08:44.361286 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-var-lock\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.361338 master-1 kubenswrapper[4740]: I1014 13:08:44.361318 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.361553 master-1 kubenswrapper[4740]: I1014 13:08:44.361386 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-var-lock\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.383676 master-1 kubenswrapper[4740]: I1014 13:08:44.383601 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b61b7a8e-e2be-4f11-a659-1919213dda51-kube-api-access\") pod \"installer-1-master-1\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.540827 master-1 kubenswrapper[4740]: I1014 13:08:44.540758 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-1"
Oct 14 13:08:44.868643 master-1 kubenswrapper[4740]: I1014 13:08:44.868253 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:44.868643 master-1 kubenswrapper[4740]: I1014 13:08:44.868309 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:08:44.868643 master-1 kubenswrapper[4740]: E1014 13:08:44.868476 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Oct 14 13:08:44.868643 master-1 kubenswrapper[4740]: E1014 13:08:44.868549 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:16.868521999 +0000 UTC m=+182.678811318 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found Oct 14 13:08:44.874519 master-1 kubenswrapper[4740]: I1014 13:08:44.874415 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:08:46.335507 master-1 kubenswrapper[4740]: I1014 13:08:46.334564 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6576f6bc9d-xfzjr"] Oct 14 13:08:46.349616 master-1 kubenswrapper[4740]: I1014 13:08:46.349518 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-1"] Oct 14 13:08:46.404497 master-1 kubenswrapper[4740]: I1014 13:08:46.403677 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 14 13:08:46.674331 master-1 kubenswrapper[4740]: I1014 13:08:46.672243 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-h7z5t"] Oct 14 13:08:46.675291 master-1 kubenswrapper[4740]: I1014 13:08:46.675222 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.705491 master-1 kubenswrapper[4740]: I1014 13:08:46.705429 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" event={"ID":"b1a35e1e-333f-480c-b1d6-059475700627","Type":"ContainerStarted","Data":"27a4cbad3b079767e9b919d9bf0cb209bf6f3cb106ef58feb6d99ee68f84176d"} Oct 14 13:08:46.716036 master-1 kubenswrapper[4740]: I1014 13:08:46.715979 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" event={"ID":"b1c6b650-cfb9-4098-8d7b-43e9735daa7e","Type":"ContainerStarted","Data":"d4d976ea506873910ec98617359e213fac97d298e1d48f1c567934a8120e8b4e"} Oct 14 13:08:46.722898 master-1 kubenswrapper[4740]: I1014 13:08:46.722658 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" event={"ID":"b51ef0bc-8b0e-4fab-b101-660ed408924c","Type":"ContainerStarted","Data":"a556e6070755c07b6ba02908f1ccb8ff8f88268a8238f6f0a43befef6f1a7d40"} Oct 14 13:08:46.724894 master-1 kubenswrapper[4740]: I1014 13:08:46.724843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerStarted","Data":"93ddaa8fe6e274708d0091bef2fb9138644bb69f5d5f9f20951b96f0721d9dea"} Oct 14 13:08:46.724894 master-1 kubenswrapper[4740]: I1014 13:08:46.724883 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerStarted","Data":"8c02147a25c6590fc2f39f47ab7a6cfafc0656844334bfba1f068b3fe5d01610"} Oct 14 13:08:46.727781 master-1 kubenswrapper[4740]: I1014 13:08:46.727709 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" event={"ID":"ed68870d-0f75-4bac-8f5e-36016becfd08","Type":"ContainerStarted","Data":"2b3581889f1f846473a9dd583060d70caa3514018ccfe65e18619f5e6369bcf8"} Oct 14 13:08:46.729452 master-1 kubenswrapper[4740]: I1014 13:08:46.729410 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" event={"ID":"910af03d-893a-443d-b6ed-fe21c26951f4","Type":"ContainerStarted","Data":"fb6c5420a8f436ff4f5d27faa23a57d82938fb6945c842f7640e172ddc7508c8"} Oct 14 13:08:46.732591 master-1 kubenswrapper[4740]: I1014 13:08:46.732544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" event={"ID":"1d68f537-be68-4623-bded-e5d7fb5c3573","Type":"ContainerStarted","Data":"608ed90a6bca3b38940087a8963029669578b61c9eedde5d8fd727413623690a"} Oct 14 13:08:46.738976 master-1 kubenswrapper[4740]: I1014 13:08:46.738920 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" event={"ID":"1fa31cdd-e051-4987-a1a2-814fc7445e6b","Type":"ContainerStarted","Data":"0b962a252bf8ab6c63246ca128554b0fe5af8f63ec0570ab52335d2fc4711118"} Oct 14 13:08:46.740471 master-1 kubenswrapper[4740]: I1014 13:08:46.740408 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" event={"ID":"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1","Type":"ContainerStarted","Data":"55219eb61bfd5e2f828ccab87afdf9e07a997c9fa1449a443d4f0cfd5047f860"} Oct 14 13:08:46.740538 master-1 kubenswrapper[4740]: I1014 13:08:46.740478 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" event={"ID":"bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1","Type":"ContainerStarted","Data":"d25a4567addb02a84c270284a091076558b3da4d326b20a838dd598a3680e338"} Oct 14 13:08:46.745836 
master-1 kubenswrapper[4740]: I1014 13:08:46.745728 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" event={"ID":"ab511c1d-28e3-448a-86ec-cea21871fd26","Type":"ContainerStarted","Data":"8cd4a4d731e79bc30290513681d3cbdcd60cae61ebad95f337bb0c00a657d2b1"} Oct 14 13:08:46.751081 master-1 kubenswrapper[4740]: I1014 13:08:46.750992 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw" podStartSLOduration=125.543552244 podStartE2EDuration="2m19.750967039s" podCreationTimestamp="2025-10-14 13:06:27 +0000 UTC" firstStartedPulling="2025-10-14 13:08:31.976783242 +0000 UTC m=+137.787072591" lastFinishedPulling="2025-10-14 13:08:46.184198057 +0000 UTC m=+151.994487386" observedRunningTime="2025-10-14 13:08:46.736870049 +0000 UTC m=+152.547159378" watchObservedRunningTime="2025-10-14 13:08:46.750967039 +0000 UTC m=+152.561256378" Oct 14 13:08:46.753068 master-1 kubenswrapper[4740]: I1014 13:08:46.752779 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" podStartSLOduration=126.687193984 podStartE2EDuration="2m20.752773489s" podCreationTimestamp="2025-10-14 13:06:26 +0000 UTC" firstStartedPulling="2025-10-14 13:08:32.013738023 +0000 UTC m=+137.824027342" lastFinishedPulling="2025-10-14 13:08:46.079317478 +0000 UTC m=+151.889606847" observedRunningTime="2025-10-14 13:08:46.750809175 +0000 UTC m=+152.561098504" watchObservedRunningTime="2025-10-14 13:08:46.752773489 +0000 UTC m=+152.563062828" Oct 14 13:08:46.754588 master-1 kubenswrapper[4740]: I1014 13:08:46.754539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" 
event={"ID":"655ad1ce-582a-4728-8bfd-ca4164468de3","Type":"ContainerStarted","Data":"db19b4e0a052b1c69547c42c5ad6d688dc0ea0b6416e827b85194ada15dd8b8b"} Oct 14 13:08:46.758991 master-1 kubenswrapper[4740]: I1014 13:08:46.758964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" event={"ID":"a4ab71e1-9b1f-42ee-8abb-8f998e3cae74","Type":"ContainerStarted","Data":"bd997d2efab21c250b0bca6703f9acaadef09e8543bc62d788b37650a75865a6"} Oct 14 13:08:46.763655 master-1 kubenswrapper[4740]: I1014 13:08:46.763600 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"b61b7a8e-e2be-4f11-a659-1919213dda51","Type":"ContainerStarted","Data":"83cc22825e56988eb9e23b29a138bc79b0bfe6feac31dee8186d5737473dd1cf"} Oct 14 13:08:46.764420 master-1 kubenswrapper[4740]: I1014 13:08:46.764303 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g" podStartSLOduration=105.549133578 podStartE2EDuration="1m59.764270476s" podCreationTimestamp="2025-10-14 13:06:47 +0000 UTC" firstStartedPulling="2025-10-14 13:08:31.888529143 +0000 UTC m=+137.698818492" lastFinishedPulling="2025-10-14 13:08:46.103666021 +0000 UTC m=+151.913955390" observedRunningTime="2025-10-14 13:08:46.761901131 +0000 UTC m=+152.572190460" watchObservedRunningTime="2025-10-14 13:08:46.764270476 +0000 UTC m=+152.574559795" Oct 14 13:08:46.798863 master-1 kubenswrapper[4740]: I1014 13:08:46.798530 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d" podStartSLOduration=130.672714044 podStartE2EDuration="2m24.798505853s" podCreationTimestamp="2025-10-14 13:06:22 +0000 UTC" firstStartedPulling="2025-10-14 13:08:32.098506766 +0000 UTC m=+137.908796135" lastFinishedPulling="2025-10-14 13:08:46.224298615 
+0000 UTC m=+152.034587944" observedRunningTime="2025-10-14 13:08:46.798377789 +0000 UTC m=+152.608667128" watchObservedRunningTime="2025-10-14 13:08:46.798505853 +0000 UTC m=+152.608795202" Oct 14 13:08:46.799421 master-1 kubenswrapper[4740]: I1014 13:08:46.799359 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-9dbb96f7-s66vj" podStartSLOduration=122.609542357 podStartE2EDuration="2m16.799351435s" podCreationTimestamp="2025-10-14 13:06:30 +0000 UTC" firstStartedPulling="2025-10-14 13:08:32.077872446 +0000 UTC m=+137.888161775" lastFinishedPulling="2025-10-14 13:08:46.267681534 +0000 UTC m=+152.077970853" observedRunningTime="2025-10-14 13:08:46.776603217 +0000 UTC m=+152.586892546" watchObservedRunningTime="2025-10-14 13:08:46.799351435 +0000 UTC m=+152.609640774" Oct 14 13:08:46.812342 master-1 kubenswrapper[4740]: I1014 13:08:46.812300 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysconfig\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812477 master-1 kubenswrapper[4740]: I1014 13:08:46.812343 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-lib-modules\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812477 master-1 kubenswrapper[4740]: I1014 13:08:46.812376 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-sys\") pod \"tuned-h7z5t\" (UID: 
\"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812477 master-1 kubenswrapper[4740]: I1014 13:08:46.812405 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-host\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812628 master-1 kubenswrapper[4740]: I1014 13:08:46.812586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-modprobe-d\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812810 master-1 kubenswrapper[4740]: I1014 13:08:46.812780 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-var-lib-kubelet\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812892 master-1 kubenswrapper[4740]: I1014 13:08:46.812870 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/370e22bb-5fff-437c-a6db-1425c2e238e3-tmp\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812929 master-1 kubenswrapper[4740]: I1014 13:08:46.812901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-tuned\") 
pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.812989 master-1 kubenswrapper[4740]: I1014 13:08:46.812975 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-run\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.813029 master-1 kubenswrapper[4740]: I1014 13:08:46.813001 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-kubernetes\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.813056 master-1 kubenswrapper[4740]: I1014 13:08:46.813033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysctl-d\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.813086 master-1 kubenswrapper[4740]: I1014 13:08:46.813054 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysctl-conf\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.813114 master-1 kubenswrapper[4740]: I1014 13:08:46.813089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vzg\" (UniqueName: 
\"kubernetes.io/projected/370e22bb-5fff-437c-a6db-1425c2e238e3-kube-api-access-h2vzg\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.813174 master-1 kubenswrapper[4740]: I1014 13:08:46.813158 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-systemd\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.815705 master-1 kubenswrapper[4740]: I1014 13:08:46.815655 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk" podStartSLOduration=128.643957108 podStartE2EDuration="2m22.815643506s" podCreationTimestamp="2025-10-14 13:06:24 +0000 UTC" firstStartedPulling="2025-10-14 13:08:31.963114924 +0000 UTC m=+137.773404263" lastFinishedPulling="2025-10-14 13:08:46.134801342 +0000 UTC m=+151.945090661" observedRunningTime="2025-10-14 13:08:46.813388513 +0000 UTC m=+152.623677852" watchObservedRunningTime="2025-10-14 13:08:46.815643506 +0000 UTC m=+152.625932835" Oct 14 13:08:46.832500 master-1 kubenswrapper[4740]: I1014 13:08:46.831849 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md" podStartSLOduration=124.57815892 podStartE2EDuration="2m18.831809342s" podCreationTimestamp="2025-10-14 13:06:28 +0000 UTC" firstStartedPulling="2025-10-14 13:08:31.825565884 +0000 UTC m=+137.635855213" lastFinishedPulling="2025-10-14 13:08:46.079216316 +0000 UTC m=+151.889505635" observedRunningTime="2025-10-14 13:08:46.831087483 +0000 UTC m=+152.641376812" watchObservedRunningTime="2025-10-14 13:08:46.831809342 +0000 UTC m=+152.642098671" Oct 14 13:08:46.844363 master-1 
kubenswrapper[4740]: I1014 13:08:46.842814 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj" podStartSLOduration=128.677188757 podStartE2EDuration="2m22.842794936s" podCreationTimestamp="2025-10-14 13:06:24 +0000 UTC" firstStartedPulling="2025-10-14 13:08:31.969215683 +0000 UTC m=+137.779505022" lastFinishedPulling="2025-10-14 13:08:46.134821872 +0000 UTC m=+151.945111201" observedRunningTime="2025-10-14 13:08:46.842613881 +0000 UTC m=+152.652903230" watchObservedRunningTime="2025-10-14 13:08:46.842794936 +0000 UTC m=+152.653084265" Oct 14 13:08:46.861285 master-1 kubenswrapper[4740]: I1014 13:08:46.855716 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh" podStartSLOduration=127.014951828 podStartE2EDuration="2m18.855696353s" podCreationTimestamp="2025-10-14 13:06:28 +0000 UTC" firstStartedPulling="2025-10-14 13:08:31.900666348 +0000 UTC m=+137.710955677" lastFinishedPulling="2025-10-14 13:08:43.741410873 +0000 UTC m=+149.551700202" observedRunningTime="2025-10-14 13:08:46.854210611 +0000 UTC m=+152.664499940" watchObservedRunningTime="2025-10-14 13:08:46.855696353 +0000 UTC m=+152.665985672" Oct 14 13:08:46.916021 master-1 kubenswrapper[4740]: I1014 13:08:46.915943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-systemd\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.916357 master-1 kubenswrapper[4740]: I1014 13:08:46.916060 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysconfig\") pod \"tuned-h7z5t\" 
(UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.916357 master-1 kubenswrapper[4740]: I1014 13:08:46.916082 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-systemd\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.916357 master-1 kubenswrapper[4740]: I1014 13:08:46.916282 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysconfig\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.917124 master-1 kubenswrapper[4740]: I1014 13:08:46.917104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-lib-modules\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.918579 master-1 kubenswrapper[4740]: I1014 13:08:46.918430 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-lib-modules\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.918705 master-1 kubenswrapper[4740]: I1014 13:08:46.918537 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-sys\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " 
pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.918749 master-1 kubenswrapper[4740]: I1014 13:08:46.918573 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-sys\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.918749 master-1 kubenswrapper[4740]: I1014 13:08:46.918733 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-host\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.918879 master-1 kubenswrapper[4740]: I1014 13:08:46.918856 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-host\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919094 master-1 kubenswrapper[4740]: I1014 13:08:46.918946 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-modprobe-d\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919094 master-1 kubenswrapper[4740]: I1014 13:08:46.919051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-var-lib-kubelet\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919094 master-1 
kubenswrapper[4740]: I1014 13:08:46.919087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-var-lib-kubelet\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919289 master-1 kubenswrapper[4740]: I1014 13:08:46.919109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/370e22bb-5fff-437c-a6db-1425c2e238e3-tmp\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919289 master-1 kubenswrapper[4740]: I1014 13:08:46.919134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-tuned\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919289 master-1 kubenswrapper[4740]: I1014 13:08:46.919147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-modprobe-d\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919289 master-1 kubenswrapper[4740]: I1014 13:08:46.919253 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-run\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919399 master-1 kubenswrapper[4740]: I1014 13:08:46.919303 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-kubernetes\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919399 master-1 kubenswrapper[4740]: I1014 13:08:46.919363 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysctl-d\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919399 master-1 kubenswrapper[4740]: I1014 13:08:46.919387 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysctl-conf\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919481 master-1 kubenswrapper[4740]: I1014 13:08:46.919417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vzg\" (UniqueName: \"kubernetes.io/projected/370e22bb-5fff-437c-a6db-1425c2e238e3-kube-api-access-h2vzg\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.919510 master-1 kubenswrapper[4740]: I1014 13:08:46.919483 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-run\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.920007 master-1 kubenswrapper[4740]: I1014 13:08:46.919534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysctl-d\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.920113 master-1 kubenswrapper[4740]: I1014 13:08:46.920079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-kubernetes\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.920297 master-1 kubenswrapper[4740]: I1014 13:08:46.920185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-sysctl-conf\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.924910 master-1 kubenswrapper[4740]: I1014 13:08:46.924863 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/370e22bb-5fff-437c-a6db-1425c2e238e3-tmp\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.925563 master-1 kubenswrapper[4740]: I1014 13:08:46.925503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/370e22bb-5fff-437c-a6db-1425c2e238e3-etc-tuned\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:46.941658 master-1 kubenswrapper[4740]: I1014 13:08:46.939151 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vzg\" (UniqueName: 
\"kubernetes.io/projected/370e22bb-5fff-437c-a6db-1425c2e238e3-kube-api-access-h2vzg\") pod \"tuned-h7z5t\" (UID: \"370e22bb-5fff-437c-a6db-1425c2e238e3\") " pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:47.017272 master-1 kubenswrapper[4740]: I1014 13:08:47.016823 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" Oct 14 13:08:47.174644 master-1 kubenswrapper[4740]: I1014 13:08:47.174594 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zbv7v"] Oct 14 13:08:47.175432 master-1 kubenswrapper[4740]: I1014 13:08:47.175406 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.177706 master-1 kubenswrapper[4740]: I1014 13:08:47.177665 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 14 13:08:47.177873 master-1 kubenswrapper[4740]: I1014 13:08:47.177823 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 14 13:08:47.178000 master-1 kubenswrapper[4740]: I1014 13:08:47.177951 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 14 13:08:47.178037 master-1 kubenswrapper[4740]: I1014 13:08:47.177997 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 14 13:08:47.184962 master-1 kubenswrapper[4740]: I1014 13:08:47.183784 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zbv7v"] Oct 14 13:08:47.325899 master-1 kubenswrapper[4740]: I1014 13:08:47.325822 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f553d2c5-b9fb-49b5-baac-00d3384d6478-config-volume\") pod \"dns-default-zbv7v\" (UID: 
\"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.326198 master-1 kubenswrapper[4740]: I1014 13:08:47.325936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvzq\" (UniqueName: \"kubernetes.io/projected/f553d2c5-b9fb-49b5-baac-00d3384d6478-kube-api-access-5kvzq\") pod \"dns-default-zbv7v\" (UID: \"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.326198 master-1 kubenswrapper[4740]: I1014 13:08:47.325979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f553d2c5-b9fb-49b5-baac-00d3384d6478-metrics-tls\") pod \"dns-default-zbv7v\" (UID: \"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.427373 master-1 kubenswrapper[4740]: I1014 13:08:47.427212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f553d2c5-b9fb-49b5-baac-00d3384d6478-metrics-tls\") pod \"dns-default-zbv7v\" (UID: \"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.428604 master-1 kubenswrapper[4740]: I1014 13:08:47.427460 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f553d2c5-b9fb-49b5-baac-00d3384d6478-config-volume\") pod \"dns-default-zbv7v\" (UID: \"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.428604 master-1 kubenswrapper[4740]: I1014 13:08:47.427554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvzq\" (UniqueName: \"kubernetes.io/projected/f553d2c5-b9fb-49b5-baac-00d3384d6478-kube-api-access-5kvzq\") pod \"dns-default-zbv7v\" (UID: 
\"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.428911 master-1 kubenswrapper[4740]: I1014 13:08:47.428870 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f553d2c5-b9fb-49b5-baac-00d3384d6478-config-volume\") pod \"dns-default-zbv7v\" (UID: \"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.439277 master-1 kubenswrapper[4740]: I1014 13:08:47.434069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f553d2c5-b9fb-49b5-baac-00d3384d6478-metrics-tls\") pod \"dns-default-zbv7v\" (UID: \"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.455187 master-1 kubenswrapper[4740]: I1014 13:08:47.455120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kvzq\" (UniqueName: \"kubernetes.io/projected/f553d2c5-b9fb-49b5-baac-00d3384d6478-kube-api-access-5kvzq\") pod \"dns-default-zbv7v\" (UID: \"f553d2c5-b9fb-49b5-baac-00d3384d6478\") " pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.489629 master-1 kubenswrapper[4740]: I1014 13:08:47.489534 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:47.498569 master-1 kubenswrapper[4740]: I1014 13:08:47.498178 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-lhshc"] Oct 14 13:08:47.499047 master-1 kubenswrapper[4740]: I1014 13:08:47.499010 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.636156 master-1 kubenswrapper[4740]: I1014 13:08:47.635738 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dc3c6b11-2798-41ca-8a29-2f4c99b0fa68-hosts-file\") pod \"node-resolver-lhshc\" (UID: \"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68\") " pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.636740 master-1 kubenswrapper[4740]: I1014 13:08:47.636702 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9gr\" (UniqueName: \"kubernetes.io/projected/dc3c6b11-2798-41ca-8a29-2f4c99b0fa68-kube-api-access-kx9gr\") pod \"node-resolver-lhshc\" (UID: \"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68\") " pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.721433 master-1 kubenswrapper[4740]: I1014 13:08:47.721309 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zbv7v"] Oct 14 13:08:47.738262 master-1 kubenswrapper[4740]: I1014 13:08:47.737881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dc3c6b11-2798-41ca-8a29-2f4c99b0fa68-hosts-file\") pod \"node-resolver-lhshc\" (UID: \"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68\") " pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.738262 master-1 kubenswrapper[4740]: I1014 13:08:47.737986 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9gr\" (UniqueName: \"kubernetes.io/projected/dc3c6b11-2798-41ca-8a29-2f4c99b0fa68-kube-api-access-kx9gr\") pod \"node-resolver-lhshc\" (UID: \"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68\") " pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.738262 master-1 kubenswrapper[4740]: I1014 13:08:47.738061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/dc3c6b11-2798-41ca-8a29-2f4c99b0fa68-hosts-file\") pod \"node-resolver-lhshc\" (UID: \"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68\") " pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.755780 master-1 kubenswrapper[4740]: I1014 13:08:47.755753 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9gr\" (UniqueName: \"kubernetes.io/projected/dc3c6b11-2798-41ca-8a29-2f4c99b0fa68-kube-api-access-kx9gr\") pod \"node-resolver-lhshc\" (UID: \"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68\") " pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.771885 master-1 kubenswrapper[4740]: I1014 13:08:47.770911 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" event={"ID":"370e22bb-5fff-437c-a6db-1425c2e238e3","Type":"ContainerStarted","Data":"8dee5499a0dc9699b8bfd8482bf52d2c77601d1919429a5c45f74e15d3615669"} Oct 14 13:08:47.771885 master-1 kubenswrapper[4740]: I1014 13:08:47.771312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" event={"ID":"370e22bb-5fff-437c-a6db-1425c2e238e3","Type":"ContainerStarted","Data":"cf088fa31263d4b8a19cc16771aa70028d66ffed0f45d2486f5c6a5f18ecd116"} Oct 14 13:08:47.774010 master-1 kubenswrapper[4740]: I1014 13:08:47.773497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" event={"ID":"910af03d-893a-443d-b6ed-fe21c26951f4","Type":"ContainerStarted","Data":"233052daf024d65865064d5f3719aa018f927fc114b861d29080d8a50fecdc06"} Oct 14 13:08:47.776806 master-1 kubenswrapper[4740]: I1014 13:08:47.776383 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"b61b7a8e-e2be-4f11-a659-1919213dda51","Type":"ContainerStarted","Data":"9f41636be726016072c28ea80b0c3486ab89141361a1377e8eeffd48959d0e15"} Oct 14 13:08:47.778824 master-1 kubenswrapper[4740]: 
I1014 13:08:47.778376 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" event={"ID":"b1c6b650-cfb9-4098-8d7b-43e9735daa7e","Type":"ContainerStarted","Data":"8a9f408f98b36e1ea4133bf7b4f42ed68e1dd2a435ba0712bbcd80ab5ee422e3"} Oct 14 13:08:47.780570 master-1 kubenswrapper[4740]: I1014 13:08:47.780091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zbv7v" event={"ID":"f553d2c5-b9fb-49b5-baac-00d3384d6478","Type":"ContainerStarted","Data":"8e6de5acb57ee1c986702f7a28f50345da1f163a3ba7e6688cc911b6bc30dd7a"} Oct 14 13:08:47.785904 master-1 kubenswrapper[4740]: I1014 13:08:47.785837 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-h7z5t" podStartSLOduration=1.785801405 podStartE2EDuration="1.785801405s" podCreationTimestamp="2025-10-14 13:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:08:47.784775487 +0000 UTC m=+153.595064826" watchObservedRunningTime="2025-10-14 13:08:47.785801405 +0000 UTC m=+153.596090734" Oct 14 13:08:47.797958 master-1 kubenswrapper[4740]: I1014 13:08:47.797889 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-7769d9677-nh2qc" podStartSLOduration=100.634335716 podStartE2EDuration="1m54.797875178s" podCreationTimestamp="2025-10-14 13:06:53 +0000 UTC" firstStartedPulling="2025-10-14 13:08:31.939729218 +0000 UTC m=+137.750018547" lastFinishedPulling="2025-10-14 13:08:46.10326868 +0000 UTC m=+151.913558009" observedRunningTime="2025-10-14 13:08:47.796673516 +0000 UTC m=+153.606962845" watchObservedRunningTime="2025-10-14 13:08:47.797875178 +0000 UTC m=+153.608164507" Oct 14 13:08:47.826818 master-1 kubenswrapper[4740]: I1014 13:08:47.826665 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd/installer-1-master-1" podStartSLOduration=3.826638154 podStartE2EDuration="3.826638154s" podCreationTimestamp="2025-10-14 13:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:08:47.823445986 +0000 UTC m=+153.633735305" watchObservedRunningTime="2025-10-14 13:08:47.826638154 +0000 UTC m=+153.636927523" Oct 14 13:08:47.827336 master-1 kubenswrapper[4740]: I1014 13:08:47.827271 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-1" podStartSLOduration=5.827214089 podStartE2EDuration="5.827214089s" podCreationTimestamp="2025-10-14 13:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:08:47.811374462 +0000 UTC m=+153.621663811" watchObservedRunningTime="2025-10-14 13:08:47.827214089 +0000 UTC m=+153.637503448" Oct 14 13:08:47.852460 master-1 kubenswrapper[4740]: I1014 13:08:47.852377 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-lhshc" Oct 14 13:08:47.867974 master-1 kubenswrapper[4740]: W1014 13:08:47.867900 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc3c6b11_2798_41ca_8a29_2f4c99b0fa68.slice/crio-d4bf2e091c4824d9d15f98b4a9c60ac816d861f1e306e739284fff7becd07a47 WatchSource:0}: Error finding container d4bf2e091c4824d9d15f98b4a9c60ac816d861f1e306e739284fff7becd07a47: Status 404 returned error can't find the container with id d4bf2e091c4824d9d15f98b4a9c60ac816d861f1e306e739284fff7becd07a47 Oct 14 13:08:48.449115 master-1 kubenswrapper[4740]: I1014 13:08:48.448770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:08:48.449963 master-1 kubenswrapper[4740]: E1014 13:08:48.448978 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:08:48.449963 master-1 kubenswrapper[4740]: E1014 13:08:48.449261 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:20.449241849 +0000 UTC m=+186.259531178 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:08:48.661820 master-1 kubenswrapper[4740]: I1014 13:08:48.660344 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-c57444595-zs4m8"] Oct 14 13:08:48.661820 master-1 kubenswrapper[4740]: I1014 13:08:48.661142 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.664040 master-1 kubenswrapper[4740]: I1014 13:08:48.663950 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 13:08:48.665148 master-1 kubenswrapper[4740]: I1014 13:08:48.665094 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 13:08:48.665566 master-1 kubenswrapper[4740]: I1014 13:08:48.665531 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 13:08:48.665790 master-1 kubenswrapper[4740]: I1014 13:08:48.665762 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 14 13:08:48.665972 master-1 kubenswrapper[4740]: I1014 13:08:48.665945 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 14 13:08:48.666113 master-1 kubenswrapper[4740]: I1014 13:08:48.666086 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 14 13:08:48.666222 master-1 kubenswrapper[4740]: I1014 13:08:48.666206 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 14 13:08:48.666462 
master-1 kubenswrapper[4740]: I1014 13:08:48.666434 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 14 13:08:48.667997 master-1 kubenswrapper[4740]: I1014 13:08:48.667955 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-c57444595-zs4m8"] Oct 14 13:08:48.791395 master-1 kubenswrapper[4740]: I1014 13:08:48.791326 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lhshc" event={"ID":"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68","Type":"ContainerStarted","Data":"7d6de97a23de75e84ca8c98da80eed2530a8189f85648c426e8d6e63bc4efa26"} Oct 14 13:08:48.791395 master-1 kubenswrapper[4740]: I1014 13:08:48.791374 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lhshc" event={"ID":"dc3c6b11-2798-41ca-8a29-2f4c99b0fa68","Type":"ContainerStarted","Data":"d4bf2e091c4824d9d15f98b4a9c60ac816d861f1e306e739284fff7becd07a47"} Oct 14 13:08:48.859155 master-1 kubenswrapper[4740]: I1014 13:08:48.859097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-client\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.859350 master-1 kubenswrapper[4740]: I1014 13:08:48.859282 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-policies\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.859350 master-1 kubenswrapper[4740]: I1014 13:08:48.859331 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-trusted-ca-bundle\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.859427 master-1 kubenswrapper[4740]: I1014 13:08:48.859352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-serving-cert\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.859427 master-1 kubenswrapper[4740]: I1014 13:08:48.859393 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-serving-ca\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.859427 master-1 kubenswrapper[4740]: I1014 13:08:48.859420 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-encryption-config\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.859519 master-1 kubenswrapper[4740]: I1014 13:08:48.859441 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-dir\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " 
pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.859557 master-1 kubenswrapper[4740]: I1014 13:08:48.859523 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsfpl\" (UniqueName: \"kubernetes.io/projected/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-kube-api-access-jsfpl\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961105 master-1 kubenswrapper[4740]: I1014 13:08:48.960968 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-policies\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961105 master-1 kubenswrapper[4740]: I1014 13:08:48.961051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-serving-cert\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961105 master-1 kubenswrapper[4740]: I1014 13:08:48.961084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-trusted-ca-bundle\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961553 master-1 kubenswrapper[4740]: I1014 13:08:48.961136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-serving-ca\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961553 master-1 kubenswrapper[4740]: I1014 13:08:48.961162 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-encryption-config\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961553 master-1 kubenswrapper[4740]: I1014 13:08:48.961191 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-dir\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961553 master-1 kubenswrapper[4740]: I1014 13:08:48.961326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsfpl\" (UniqueName: \"kubernetes.io/projected/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-kube-api-access-jsfpl\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.961553 master-1 kubenswrapper[4740]: I1014 13:08:48.961514 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-client\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.962966 master-1 kubenswrapper[4740]: I1014 13:08:48.962299 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-dir\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.963511 master-1 kubenswrapper[4740]: I1014 13:08:48.963455 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-policies\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.963511 master-1 kubenswrapper[4740]: I1014 13:08:48.963489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-trusted-ca-bundle\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.963698 master-1 kubenswrapper[4740]: I1014 13:08:48.963527 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-serving-ca\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.966546 master-1 kubenswrapper[4740]: I1014 13:08:48.966475 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-encryption-config\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.967480 master-1 kubenswrapper[4740]: I1014 13:08:48.967409 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-serving-cert\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:48.971941 master-1 kubenswrapper[4740]: I1014 13:08:48.971868 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-client\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:49.004168 master-1 kubenswrapper[4740]: I1014 13:08:49.004075 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsfpl\" (UniqueName: \"kubernetes.io/projected/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-kube-api-access-jsfpl\") pod \"apiserver-c57444595-zs4m8\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") " pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:49.026772 master-1 kubenswrapper[4740]: I1014 13:08:49.026024 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:49.366955 master-1 kubenswrapper[4740]: I1014 13:08:49.366870 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:08:49.367173 master-1 kubenswrapper[4740]: E1014 13:08:49.367016 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:05.366992289 +0000 UTC m=+171.177281618 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:49.450675 master-1 kubenswrapper[4740]: I1014 13:08:49.450544 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lhshc" podStartSLOduration=2.450494747 podStartE2EDuration="2.450494747s" podCreationTimestamp="2025-10-14 13:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:08:48.805901184 +0000 UTC m=+154.616190523" watchObservedRunningTime="2025-10-14 13:08:49.450494747 +0000 UTC m=+155.260784116" Oct 14 13:08:49.453978 master-1 kubenswrapper[4740]: I1014 13:08:49.453921 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-c57444595-zs4m8"] Oct 14 13:08:49.460319 
master-1 kubenswrapper[4740]: W1014 13:08:49.460214 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57cd904e_5dfb_4cc1_8bd8_8adf12b276c6.slice/crio-a7a0890d7ffcce8e3f0c608219d432f3f64f3d0bdbc36db56620e1dfeaa9fe81 WatchSource:0}: Error finding container a7a0890d7ffcce8e3f0c608219d432f3f64f3d0bdbc36db56620e1dfeaa9fe81: Status 404 returned error can't find the container with id a7a0890d7ffcce8e3f0c608219d432f3f64f3d0bdbc36db56620e1dfeaa9fe81 Oct 14 13:08:49.467606 master-1 kubenswrapper[4740]: I1014 13:08:49.467552 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:08:49.467918 master-1 kubenswrapper[4740]: E1014 13:08:49.467859 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:09:05.467824396 +0000 UTC m=+171.278113765 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:08:49.827389 master-1 kubenswrapper[4740]: I1014 13:08:49.827291 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" event={"ID":"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6","Type":"ContainerStarted","Data":"a7a0890d7ffcce8e3f0c608219d432f3f64f3d0bdbc36db56620e1dfeaa9fe81"} Oct 14 13:08:51.309379 master-1 kubenswrapper[4740]: I1014 13:08:51.309320 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 14 13:08:51.309978 master-1 kubenswrapper[4740]: I1014 13:08:51.309932 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.312364 master-1 kubenswrapper[4740]: I1014 13:08:51.312328 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 13:08:51.316003 master-1 kubenswrapper[4740]: I1014 13:08:51.315957 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 14 13:08:51.495678 master-1 kubenswrapper[4740]: I1014 13:08:51.495627 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.495908 master-1 kubenswrapper[4740]: I1014 13:08:51.495774 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-var-lock\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.495908 master-1 kubenswrapper[4740]: I1014 13:08:51.495894 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dddfa29-2bde-416f-870d-c24a4c6c67db-kube-api-access\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.597418 master-1 kubenswrapper[4740]: I1014 13:08:51.597318 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.597647 master-1 kubenswrapper[4740]: I1014 13:08:51.597446 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.597647 master-1 kubenswrapper[4740]: I1014 13:08:51.597561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-var-lock\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.597784 master-1 kubenswrapper[4740]: I1014 
13:08:51.597657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-var-lock\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.597784 master-1 kubenswrapper[4740]: I1014 13:08:51.597668 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dddfa29-2bde-416f-870d-c24a4c6c67db-kube-api-access\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.616250 master-1 kubenswrapper[4740]: I1014 13:08:51.616147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dddfa29-2bde-416f-870d-c24a4c6c67db-kube-api-access\") pod \"installer-1-master-1\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.626414 master-1 kubenswrapper[4740]: I1014 13:08:51.626365 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:08:51.847119 master-1 kubenswrapper[4740]: I1014 13:08:51.846948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zbv7v" event={"ID":"f553d2c5-b9fb-49b5-baac-00d3384d6478","Type":"ContainerStarted","Data":"70db9fb715b0b5c956083d385ddcef2577bb0498cb41988e8e6ed3611b881f28"} Oct 14 13:08:51.852101 master-1 kubenswrapper[4740]: I1014 13:08:51.850835 4740 generic.go:334] "Generic (PLEG): container finished" podID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerID="50e09bd480a9486fece5adcc3edd27b4717e755898d98236cb8e5ad7102da2a0" exitCode=0 Oct 14 13:08:51.852101 master-1 kubenswrapper[4740]: I1014 13:08:51.850883 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" event={"ID":"ed68870d-0f75-4bac-8f5e-36016becfd08","Type":"ContainerDied","Data":"50e09bd480a9486fece5adcc3edd27b4717e755898d98236cb8e5ad7102da2a0"} Oct 14 13:08:52.122573 master-1 kubenswrapper[4740]: I1014 13:08:52.122433 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 14 13:08:52.324914 master-1 kubenswrapper[4740]: W1014 13:08:52.324864 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8dddfa29_2bde_416f_870d_c24a4c6c67db.slice/crio-dcac79a41e252093856baafe6af533c786dff50094089580f33f9266280a4f91 WatchSource:0}: Error finding container dcac79a41e252093856baafe6af533c786dff50094089580f33f9266280a4f91: Status 404 returned error can't find the container with id dcac79a41e252093856baafe6af533c786dff50094089580f33f9266280a4f91 Oct 14 13:08:52.859507 master-1 kubenswrapper[4740]: I1014 13:08:52.859005 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" 
event={"ID":"8dddfa29-2bde-416f-870d-c24a4c6c67db","Type":"ContainerStarted","Data":"981741f7052478875c13c55a55203ce953f2bf65a91b6409d8b46febf48e712d"} Oct 14 13:08:52.859507 master-1 kubenswrapper[4740]: I1014 13:08:52.859053 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" event={"ID":"8dddfa29-2bde-416f-870d-c24a4c6c67db","Type":"ContainerStarted","Data":"dcac79a41e252093856baafe6af533c786dff50094089580f33f9266280a4f91"} Oct 14 13:08:52.860956 master-1 kubenswrapper[4740]: I1014 13:08:52.860886 4740 generic.go:334] "Generic (PLEG): container finished" podID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerID="994a341162264e39b9c97158b4e18868680b0687f0b6a63a8495aa495b95e9e1" exitCode=0 Oct 14 13:08:52.861014 master-1 kubenswrapper[4740]: I1014 13:08:52.860968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" event={"ID":"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6","Type":"ContainerDied","Data":"994a341162264e39b9c97158b4e18868680b0687f0b6a63a8495aa495b95e9e1"} Oct 14 13:08:52.863020 master-1 kubenswrapper[4740]: I1014 13:08:52.862993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zbv7v" event={"ID":"f553d2c5-b9fb-49b5-baac-00d3384d6478","Type":"ContainerStarted","Data":"900d8cb5297ebc15aa2c3223879a54e4869748ae3109f385eea1d7bb43858b08"} Oct 14 13:08:52.863292 master-1 kubenswrapper[4740]: I1014 13:08:52.863252 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zbv7v" Oct 14 13:08:52.867694 master-1 kubenswrapper[4740]: I1014 13:08:52.867631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" event={"ID":"ed68870d-0f75-4bac-8f5e-36016becfd08","Type":"ContainerStarted","Data":"2a4c2ed2bbbd4797e6180de90b1ee5e438d370126f0614ca02705325ec43d7bf"} Oct 14 13:08:52.878377 master-1 kubenswrapper[4740]: I1014 
13:08:52.878322 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-1" podStartSLOduration=1.8783086500000001 podStartE2EDuration="1.87830865s" podCreationTimestamp="2025-10-14 13:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:08:52.876669205 +0000 UTC m=+158.686958574" watchObservedRunningTime="2025-10-14 13:08:52.87830865 +0000 UTC m=+158.688597979" Oct 14 13:08:52.918717 master-1 kubenswrapper[4740]: I1014 13:08:52.918528 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zbv7v" podStartSLOduration=2.247633283 podStartE2EDuration="5.918514492s" podCreationTimestamp="2025-10-14 13:08:47 +0000 UTC" firstStartedPulling="2025-10-14 13:08:47.732210205 +0000 UTC m=+153.542499534" lastFinishedPulling="2025-10-14 13:08:51.403091394 +0000 UTC m=+157.213380743" observedRunningTime="2025-10-14 13:08:52.91666327 +0000 UTC m=+158.726952649" watchObservedRunningTime="2025-10-14 13:08:52.918514492 +0000 UTC m=+158.728803821" Oct 14 13:08:53.225790 master-1 kubenswrapper[4740]: I1014 13:08:53.225725 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 14 13:08:53.225999 master-1 kubenswrapper[4740]: I1014 13:08:53.225955 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-1" podUID="b1c6b650-cfb9-4098-8d7b-43e9735daa7e" containerName="installer" containerID="cri-o://8a9f408f98b36e1ea4133bf7b4f42ed68e1dd2a435ba0712bbcd80ab5ee422e3" gracePeriod=30 Oct 14 13:08:53.874621 master-1 kubenswrapper[4740]: I1014 13:08:53.874567 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" 
event={"ID":"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6","Type":"ContainerStarted","Data":"194ea90143b4d79876e5b96800a908311ed2f6a1f27daf72bfecc0523fd85c7f"} Oct 14 13:08:53.877672 master-1 kubenswrapper[4740]: I1014 13:08:53.877598 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" event={"ID":"ed68870d-0f75-4bac-8f5e-36016becfd08","Type":"ContainerStarted","Data":"12d72bb9d4324b183104d8033fbb4b64412be63d92c608ad75fd099e5f63f4a7"} Oct 14 13:08:53.902118 master-1 kubenswrapper[4740]: I1014 13:08:53.901963 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podStartSLOduration=2.987556531 podStartE2EDuration="5.901935507s" podCreationTimestamp="2025-10-14 13:08:48 +0000 UTC" firstStartedPulling="2025-10-14 13:08:49.463487866 +0000 UTC m=+155.273777235" lastFinishedPulling="2025-10-14 13:08:52.377866892 +0000 UTC m=+158.188156211" observedRunningTime="2025-10-14 13:08:53.89808189 +0000 UTC m=+159.708371269" watchObservedRunningTime="2025-10-14 13:08:53.901935507 +0000 UTC m=+159.712224866" Oct 14 13:08:53.930177 master-1 kubenswrapper[4740]: I1014 13:08:53.930080 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podStartSLOduration=10.84774549 podStartE2EDuration="15.930056714s" podCreationTimestamp="2025-10-14 13:08:38 +0000 UTC" firstStartedPulling="2025-10-14 13:08:46.348672431 +0000 UTC m=+152.158961760" lastFinishedPulling="2025-10-14 13:08:51.430983635 +0000 UTC m=+157.241272984" observedRunningTime="2025-10-14 13:08:53.926187997 +0000 UTC m=+159.736477406" watchObservedRunningTime="2025-10-14 13:08:53.930056714 +0000 UTC m=+159.740346073" Oct 14 13:08:54.026872 master-1 kubenswrapper[4740]: I1014 13:08:54.026803 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 
13:08:54.026872 master-1 kubenswrapper[4740]: I1014 13:08:54.026878 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:54.040909 master-1 kubenswrapper[4740]: I1014 13:08:54.040849 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:54.892909 master-1 kubenswrapper[4740]: I1014 13:08:54.892808 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:08:55.077526 master-1 kubenswrapper[4740]: I1014 13:08:55.077422 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" Oct 14 13:08:55.077526 master-1 kubenswrapper[4740]: I1014 13:08:55.077492 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: I1014 13:08:55.087898 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]etcd ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 
13:08:55.088032 master-1 kubenswrapper[4740]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:08:55.088032 master-1 kubenswrapper[4740]: livez check failed Oct 14 13:08:55.089181 master-1 kubenswrapper[4740]: I1014 13:08:55.088064 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:08:55.633564 master-1 kubenswrapper[4740]: I1014 13:08:55.633481 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"] Oct 14 13:08:55.634372 master-1 kubenswrapper[4740]: I1014 13:08:55.634323 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.710831 master-1 kubenswrapper[4740]: I1014 13:08:55.643431 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"] Oct 14 13:08:55.710831 master-1 kubenswrapper[4740]: I1014 13:08:55.657261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22084985-e5ef-4430-89e7-fb673e7c928f-kube-api-access\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.710831 master-1 kubenswrapper[4740]: I1014 13:08:55.657336 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-var-lock\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.710831 master-1 kubenswrapper[4740]: I1014 13:08:55.657373 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.758897 master-1 kubenswrapper[4740]: I1014 13:08:55.758805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22084985-e5ef-4430-89e7-fb673e7c928f-kube-api-access\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.758897 master-1 kubenswrapper[4740]: I1014 13:08:55.758908 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-var-lock\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.759160 master-1 kubenswrapper[4740]: I1014 13:08:55.758938 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.759160 master-1 kubenswrapper[4740]: I1014 13:08:55.759120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-var-lock\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.759344 master-1 kubenswrapper[4740]: I1014 13:08:55.759179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:55.780362 master-1 kubenswrapper[4740]: I1014 13:08:55.779279 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22084985-e5ef-4430-89e7-fb673e7c928f-kube-api-access\") pod \"installer-2-master-1\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") " pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:56.030981 master-1 kubenswrapper[4740]: I1014 13:08:56.030891 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1" Oct 14 13:08:56.464637 master-1 kubenswrapper[4740]: I1014 13:08:56.464525 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"] Oct 14 13:08:56.471905 master-1 kubenswrapper[4740]: W1014 13:08:56.471845 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod22084985_e5ef_4430_89e7_fb673e7c928f.slice/crio-a6a02d31ae3a3523aae3b338126afd13d472eaba377f8cf580c4f494e2103501 WatchSource:0}: Error finding container a6a02d31ae3a3523aae3b338126afd13d472eaba377f8cf580c4f494e2103501: Status 404 returned error can't find the container with id a6a02d31ae3a3523aae3b338126afd13d472eaba377f8cf580c4f494e2103501 Oct 14 13:08:56.896014 master-1 kubenswrapper[4740]: I1014 13:08:56.895914 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" event={"ID":"22084985-e5ef-4430-89e7-fb673e7c928f","Type":"ContainerStarted","Data":"7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb"} Oct 14 13:08:56.896014 master-1 kubenswrapper[4740]: I1014 13:08:56.895996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" event={"ID":"22084985-e5ef-4430-89e7-fb673e7c928f","Type":"ContainerStarted","Data":"a6a02d31ae3a3523aae3b338126afd13d472eaba377f8cf580c4f494e2103501"} Oct 14 13:08:58.566317 master-1 kubenswrapper[4740]: I1014 13:08:58.566223 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-sndvg" Oct 14 13:08:58.586782 master-1 kubenswrapper[4740]: I1014 13:08:58.586680 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-1" podStartSLOduration=3.586658582 podStartE2EDuration="3.586658582s" podCreationTimestamp="2025-10-14 13:08:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:08:56.914468145 +0000 UTC m=+162.724757504" watchObservedRunningTime="2025-10-14 13:08:58.586658582 +0000 UTC m=+164.396947941" Oct 14 13:09:00.085493 master-1 kubenswrapper[4740]: I1014 13:09:00.085392 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" Oct 14 13:09:00.093183 master-1 kubenswrapper[4740]: I1014 13:09:00.093118 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" Oct 14 13:09:01.978858 master-1 kubenswrapper[4740]: I1014 13:09:01.978777 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/0.log" Oct 14 13:09:02.186204 master-1 kubenswrapper[4740]: I1014 13:09:02.186104 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/1.log" Oct 14 13:09:02.493687 master-1 kubenswrapper[4740]: I1014 13:09:02.493571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zbv7v" Oct 14 13:09:03.179103 master-1 kubenswrapper[4740]: I1014 13:09:03.179008 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c57444595-zs4m8_57cd904e-5dfb-4cc1-8bd8-8adf12b276c6/fix-audit-permissions/0.log" Oct 14 13:09:03.387070 master-1 kubenswrapper[4740]: I1014 13:09:03.386953 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c57444595-zs4m8_57cd904e-5dfb-4cc1-8bd8-8adf12b276c6/oauth-apiserver/0.log" Oct 14 13:09:03.429673 master-1 kubenswrapper[4740]: I1014 
13:09:03.429469 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"] Oct 14 13:09:03.429983 master-1 kubenswrapper[4740]: I1014 13:09:03.429767 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-1" podUID="22084985-e5ef-4430-89e7-fb673e7c928f" containerName="installer" containerID="cri-o://7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb" gracePeriod=30 Oct 14 13:09:03.472098 master-1 kubenswrapper[4740]: I1014 13:09:03.472001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:09:03.472098 master-1 kubenswrapper[4740]: I1014 13:09:03.472070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:09:03.472494 master-1 kubenswrapper[4740]: I1014 13:09:03.472165 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 13:09:03.477941 master-1 kubenswrapper[4740]: I1014 13:09:03.477887 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4ca808a-394d-4a17-ac12-1df264c7ed92-proxy-tls\") pod \"machine-config-operator-7b75469658-j2dbc\" (UID: \"c4ca808a-394d-4a17-ac12-1df264c7ed92\") " pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" Oct 14 13:09:03.478645 master-1 kubenswrapper[4740]: I1014 13:09:03.478589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7be129fe-d04d-4384-a0e9-76b3148a1f3e-package-server-manager-serving-cert\") pod \"package-server-manager-798cc87f55-j2bjv\" (UID: \"7be129fe-d04d-4384-a0e9-76b3148a1f3e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:09:03.478760 master-1 kubenswrapper[4740]: I1014 13:09:03.478585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/62ef5e24-de36-454a-a34c-e741a86a6f96-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-5b5dd85dcc-cxtgh\" (UID: \"62ef5e24-de36-454a-a34c-e741a86a6f96\") " pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:09:03.532359 master-1 kubenswrapper[4740]: I1014 13:09:03.532297 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" Oct 14 13:09:03.574529 master-1 kubenswrapper[4740]: I1014 13:09:03.574470 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:09:03.574529 master-1 kubenswrapper[4740]: I1014 13:09:03.574528 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:09:03.574824 master-1 kubenswrapper[4740]: I1014 13:09:03.574564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:09:03.574824 master-1 kubenswrapper[4740]: I1014 13:09:03.574606 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:09:03.574824 master-1 kubenswrapper[4740]: I1014 13:09:03.574691 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:09:03.579621 master-1 kubenswrapper[4740]: I1014 13:09:03.579562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d292fbb-b49c-4543-993b-738103c7419b-srv-cert\") pod \"catalog-operator-f966fb6f8-dwwm2\" (UID: \"3d292fbb-b49c-4543-993b-738103c7419b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:09:03.580280 master-1 kubenswrapper[4740]: I1014 13:09:03.580186 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-mgc7h\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:09:03.580671 master-1 kubenswrapper[4740]: I1014 13:09:03.580603 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57526e49-7f51-4a66-8f48-0c485fc1e88f-srv-cert\") pod \"olm-operator-867f8475d9-fl56c\" (UID: \"57526e49-7f51-4a66-8f48-0c485fc1e88f\") " pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:09:03.580950 master-1 kubenswrapper[4740]: I1014 13:09:03.580869 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a106ff8-388a-4d30-8370-aad661eb4365-marketplace-operator-metrics\") pod \"marketplace-operator-c4f798dd4-djh96\" (UID: \"2a106ff8-388a-4d30-8370-aad661eb4365\") " pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:09:03.581057 master-1 
kubenswrapper[4740]: I1014 13:09:03.580958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"multus-admission-controller-77b66fddc8-9npgz\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:09:03.583891 master-1 kubenswrapper[4740]: I1014 13:09:03.583804 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-nh2qc_910af03d-893a-443d-b6ed-fe21c26951f4/dns-operator/0.log"
Oct 14 13:09:03.596084 master-1 kubenswrapper[4740]: I1014 13:09:03.596016 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz"
Oct 14 13:09:03.644975 master-1 kubenswrapper[4740]: I1014 13:09:03.644792 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"
Oct 14 13:09:03.659280 master-1 kubenswrapper[4740]: I1014 13:09:03.659177 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"
Oct 14 13:09:03.674914 master-1 kubenswrapper[4740]: I1014 13:09:03.674299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"
Oct 14 13:09:03.681869 master-1 kubenswrapper[4740]: I1014 13:09:03.681701 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"
Oct 14 13:09:03.781668 master-1 kubenswrapper[4740]: I1014 13:09:03.781605 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-nh2qc_910af03d-893a-443d-b6ed-fe21c26951f4/kube-rbac-proxy/0.log"
Oct 14 13:09:03.812294 master-1 kubenswrapper[4740]: I1014 13:09:03.811378 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:09:03.865308 master-1 kubenswrapper[4740]: I1014 13:09:03.862718 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"
Oct 14 13:09:03.888555 master-1 kubenswrapper[4740]: I1014 13:09:03.888523 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-1_22084985-e5ef-4430-89e7-fb673e7c928f/installer/0.log"
Oct 14 13:09:03.888635 master-1 kubenswrapper[4740]: I1014 13:09:03.888598 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1"
Oct 14 13:09:03.940131 master-1 kubenswrapper[4740]: I1014 13:09:03.940093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-1_22084985-e5ef-4430-89e7-fb673e7c928f/installer/0.log"
Oct 14 13:09:03.940218 master-1 kubenswrapper[4740]: I1014 13:09:03.940158 4740 generic.go:334] "Generic (PLEG): container finished" podID="22084985-e5ef-4430-89e7-fb673e7c928f" containerID="7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb" exitCode=1
Oct 14 13:09:03.940218 master-1 kubenswrapper[4740]: I1014 13:09:03.940189 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" event={"ID":"22084985-e5ef-4430-89e7-fb673e7c928f","Type":"ContainerDied","Data":"7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb"}
Oct 14 13:09:03.940300 master-1 kubenswrapper[4740]: I1014 13:09:03.940242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-1" event={"ID":"22084985-e5ef-4430-89e7-fb673e7c928f","Type":"ContainerDied","Data":"a6a02d31ae3a3523aae3b338126afd13d472eaba377f8cf580c4f494e2103501"}
Oct 14 13:09:03.940300 master-1 kubenswrapper[4740]: I1014 13:09:03.940262 4740 scope.go:117] "RemoveContainer" containerID="7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb"
Oct 14 13:09:03.940300 master-1 kubenswrapper[4740]: I1014 13:09:03.940264 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-1"
Oct 14 13:09:03.952863 master-1 kubenswrapper[4740]: I1014 13:09:03.952812 4740 scope.go:117] "RemoveContainer" containerID="7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb"
Oct 14 13:09:03.953278 master-1 kubenswrapper[4740]: E1014 13:09:03.953222 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb\": container with ID starting with 7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb not found: ID does not exist" containerID="7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb"
Oct 14 13:09:03.954395 master-1 kubenswrapper[4740]: I1014 13:09:03.953290 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb"} err="failed to get container status \"7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb\": rpc error: code = NotFound desc = could not find container \"7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb\": container with ID starting with 7d7a49621ee4f8f0307ab8b8a6c69aa2f3e7355493783469b338a75664313bcb not found: ID does not exist"
Oct 14 13:09:03.979498 master-1 kubenswrapper[4740]: I1014 13:09:03.979439 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22084985-e5ef-4430-89e7-fb673e7c928f-kube-api-access\") pod \"22084985-e5ef-4430-89e7-fb673e7c928f\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") "
Oct 14 13:09:03.979498 master-1 kubenswrapper[4740]: I1014 13:09:03.979497 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-var-lock\") pod \"22084985-e5ef-4430-89e7-fb673e7c928f\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") "
Oct 14 13:09:03.979684 master-1 kubenswrapper[4740]: I1014 13:09:03.979538 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-kubelet-dir\") pod \"22084985-e5ef-4430-89e7-fb673e7c928f\" (UID: \"22084985-e5ef-4430-89e7-fb673e7c928f\") "
Oct 14 13:09:03.979854 master-1 kubenswrapper[4740]: I1014 13:09:03.979825 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22084985-e5ef-4430-89e7-fb673e7c928f" (UID: "22084985-e5ef-4430-89e7-fb673e7c928f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:09:03.980203 master-1 kubenswrapper[4740]: I1014 13:09:03.980145 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-var-lock" (OuterVolumeSpecName: "var-lock") pod "22084985-e5ef-4430-89e7-fb673e7c928f" (UID: "22084985-e5ef-4430-89e7-fb673e7c928f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:09:03.997129 master-1 kubenswrapper[4740]: I1014 13:09:03.989958 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22084985-e5ef-4430-89e7-fb673e7c928f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22084985-e5ef-4430-89e7-fb673e7c928f" (UID: "22084985-e5ef-4430-89e7-fb673e7c928f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:09:04.067367 master-1 kubenswrapper[4740]: I1014 13:09:04.062531 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh"]
Oct 14 13:09:04.075581 master-1 kubenswrapper[4740]: I1014 13:09:04.068615 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-9npgz"]
Oct 14 13:09:04.075581 master-1 kubenswrapper[4740]: W1014 13:09:04.070602 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01742ba1_f43b_4ff2_97d5_1a535e925a0f.slice/crio-48399003deb36067da52769965d5af83e6a3b7ae56320e44fc673696139e5026 WatchSource:0}: Error finding container 48399003deb36067da52769965d5af83e6a3b7ae56320e44fc673696139e5026: Status 404 returned error can't find the container with id 48399003deb36067da52769965d5af83e6a3b7ae56320e44fc673696139e5026
Oct 14 13:09:04.075581 master-1 kubenswrapper[4740]: W1014 13:09:04.073830 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62ef5e24_de36_454a_a34c_e741a86a6f96.slice/crio-f635dd13e8d4226da1f045250e597592b8be1de1fa7f36d2b3e82071548e0c21 WatchSource:0}: Error finding container f635dd13e8d4226da1f045250e597592b8be1de1fa7f36d2b3e82071548e0c21: Status 404 returned error can't find the container with id f635dd13e8d4226da1f045250e597592b8be1de1fa7f36d2b3e82071548e0c21
Oct 14 13:09:04.082543 master-1 kubenswrapper[4740]: I1014 13:09:04.080536 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22084985-e5ef-4430-89e7-fb673e7c928f-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 14 13:09:04.082543 master-1 kubenswrapper[4740]: I1014 13:09:04.080562 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 14 13:09:04.082543 master-1 kubenswrapper[4740]: I1014 13:09:04.080583 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22084985-e5ef-4430-89e7-fb673e7c928f-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:09:04.249333 master-1 kubenswrapper[4740]: I1014 13:09:04.246342 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc"]
Oct 14 13:09:04.250613 master-1 kubenswrapper[4740]: I1014 13:09:04.250581 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c"]
Oct 14 13:09:04.268901 master-1 kubenswrapper[4740]: I1014 13:09:04.268802 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"]
Oct 14 13:09:04.271147 master-1 kubenswrapper[4740]: I1014 13:09:04.271094 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-1"]
Oct 14 13:09:04.272820 master-1 kubenswrapper[4740]: I1014 13:09:04.272770 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2"]
Oct 14 13:09:04.376116 master-1 kubenswrapper[4740]: I1014 13:09:04.376064 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-zbv7v_f553d2c5-b9fb-49b5-baac-00d3384d6478/dns/0.log"
Oct 14 13:09:04.388557 master-1 kubenswrapper[4740]: I1014 13:09:04.388521 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"]
Oct 14 13:09:04.404483 master-1 kubenswrapper[4740]: I1014 13:09:04.404391 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-c4f798dd4-djh96"]
Oct 14 13:09:04.406811 master-1 kubenswrapper[4740]: I1014 13:09:04.406770 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv"]
Oct 14 13:09:04.526160 master-1 kubenswrapper[4740]: W1014 13:09:04.526003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ca808a_394d_4a17_ac12_1df264c7ed92.slice/crio-fc4f6fc0b41447ccf31f7dae4064d48278d4e5257e6dd36fa7229789a1806b57 WatchSource:0}: Error finding container fc4f6fc0b41447ccf31f7dae4064d48278d4e5257e6dd36fa7229789a1806b57: Status 404 returned error can't find the container with id fc4f6fc0b41447ccf31f7dae4064d48278d4e5257e6dd36fa7229789a1806b57
Oct 14 13:09:04.528139 master-1 kubenswrapper[4740]: W1014 13:09:04.528047 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57526e49_7f51_4a66_8f48_0c485fc1e88f.slice/crio-4bd561f33082f01aa134126641f1b90e487a123eaabcb9516e36534e83031032 WatchSource:0}: Error finding container 4bd561f33082f01aa134126641f1b90e487a123eaabcb9516e36534e83031032: Status 404 returned error can't find the container with id 4bd561f33082f01aa134126641f1b90e487a123eaabcb9516e36534e83031032
Oct 14 13:09:04.531237 master-1 kubenswrapper[4740]: W1014 13:09:04.531187 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d292fbb_b49c_4543_993b_738103c7419b.slice/crio-1eca57a0e395d08227c93f989036500ab31ec5c67ee75e8ba9591a4bdbbb1a16 WatchSource:0}: Error finding container 1eca57a0e395d08227c93f989036500ab31ec5c67ee75e8ba9591a4bdbbb1a16: Status 404 returned error can't find the container with id 1eca57a0e395d08227c93f989036500ab31ec5c67ee75e8ba9591a4bdbbb1a16
Oct 14 13:09:04.536915 master-1 kubenswrapper[4740]: W1014 13:09:04.536887 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec085d84_4833_4e0b_9e6a_35b983a7059b.slice/crio-5ecf35a02f431bb4456c5b0413049c600db729de59229f6510f04427ca56460a WatchSource:0}: Error finding container 5ecf35a02f431bb4456c5b0413049c600db729de59229f6510f04427ca56460a: Status 404 returned error can't find the container with id 5ecf35a02f431bb4456c5b0413049c600db729de59229f6510f04427ca56460a
Oct 14 13:09:04.538703 master-1 kubenswrapper[4740]: W1014 13:09:04.538674 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a106ff8_388a_4d30_8370_aad661eb4365.slice/crio-9460c3fba9db000314222faeb6aa83c96fec2ecbb784508f2fdf4a1a79ed5dd3 WatchSource:0}: Error finding container 9460c3fba9db000314222faeb6aa83c96fec2ecbb784508f2fdf4a1a79ed5dd3: Status 404 returned error can't find the container with id 9460c3fba9db000314222faeb6aa83c96fec2ecbb784508f2fdf4a1a79ed5dd3
Oct 14 13:09:04.540500 master-1 kubenswrapper[4740]: W1014 13:09:04.540429 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be129fe_d04d_4384_a0e9_76b3148a1f3e.slice/crio-0657cdb3586784a8cbefa741c9ec0587f0136599014df5a5550e01212fbc51db WatchSource:0}: Error finding container 0657cdb3586784a8cbefa741c9ec0587f0136599014df5a5550e01212fbc51db: Status 404 returned error can't find the container with id 0657cdb3586784a8cbefa741c9ec0587f0136599014df5a5550e01212fbc51db
Oct 14 13:09:04.579902 master-1 kubenswrapper[4740]: I1014 13:09:04.579850 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-zbv7v_f553d2c5-b9fb-49b5-baac-00d3384d6478/kube-rbac-proxy/0.log"
Oct 14 13:09:04.953716 master-1 kubenswrapper[4740]: I1014 13:09:04.953660 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22084985-e5ef-4430-89e7-fb673e7c928f" path="/var/lib/kubelet/pods/22084985-e5ef-4430-89e7-fb673e7c928f/volumes"
Oct 14 13:09:04.954221 master-1 kubenswrapper[4740]: I1014 13:09:04.954183 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" event={"ID":"c4ca808a-394d-4a17-ac12-1df264c7ed92","Type":"ContainerStarted","Data":"03bc7f17d840e194f8a3189e59478dfe18d10f219bd7a10e505776173717ffe1"}
Oct 14 13:09:04.954221 master-1 kubenswrapper[4740]: I1014 13:09:04.954215 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" event={"ID":"c4ca808a-394d-4a17-ac12-1df264c7ed92","Type":"ContainerStarted","Data":"b61df5cfa8541e3132f5a70893b90c6aeb0cc1ace2485b37f230173855705d39"}
Oct 14 13:09:04.954370 master-1 kubenswrapper[4740]: I1014 13:09:04.954259 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" event={"ID":"c4ca808a-394d-4a17-ac12-1df264c7ed92","Type":"ContainerStarted","Data":"fc4f6fc0b41447ccf31f7dae4064d48278d4e5257e6dd36fa7229789a1806b57"}
Oct 14 13:09:04.954370 master-1 kubenswrapper[4740]: I1014 13:09:04.954270 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" event={"ID":"62ef5e24-de36-454a-a34c-e741a86a6f96","Type":"ContainerStarted","Data":"f635dd13e8d4226da1f045250e597592b8be1de1fa7f36d2b3e82071548e0c21"}
Oct 14 13:09:04.954370 master-1 kubenswrapper[4740]: I1014 13:09:04.954280 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" event={"ID":"57526e49-7f51-4a66-8f48-0c485fc1e88f","Type":"ContainerStarted","Data":"4bd561f33082f01aa134126641f1b90e487a123eaabcb9516e36534e83031032"}
Oct 14 13:09:04.956454 master-1 kubenswrapper[4740]: I1014 13:09:04.956400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" event={"ID":"01742ba1-f43b-4ff2-97d5-1a535e925a0f","Type":"ContainerStarted","Data":"48399003deb36067da52769965d5af83e6a3b7ae56320e44fc673696139e5026"}
Oct 14 13:09:04.957602 master-1 kubenswrapper[4740]: I1014 13:09:04.957563 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" event={"ID":"2a106ff8-388a-4d30-8370-aad661eb4365","Type":"ContainerStarted","Data":"9460c3fba9db000314222faeb6aa83c96fec2ecbb784508f2fdf4a1a79ed5dd3"}
Oct 14 13:09:04.959346 master-1 kubenswrapper[4740]: I1014 13:09:04.959292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" event={"ID":"7be129fe-d04d-4384-a0e9-76b3148a1f3e","Type":"ContainerStarted","Data":"c23f3a4026fc0a7ab51b81fc2d31a5ff3566b79717b054e0a3542283bb716e72"}
Oct 14 13:09:04.959346 master-1 kubenswrapper[4740]: I1014 13:09:04.959340 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" event={"ID":"7be129fe-d04d-4384-a0e9-76b3148a1f3e","Type":"ContainerStarted","Data":"0657cdb3586784a8cbefa741c9ec0587f0136599014df5a5550e01212fbc51db"}
Oct 14 13:09:04.960828 master-1 kubenswrapper[4740]: I1014 13:09:04.960789 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" event={"ID":"3d292fbb-b49c-4543-993b-738103c7419b","Type":"ContainerStarted","Data":"1eca57a0e395d08227c93f989036500ab31ec5c67ee75e8ba9591a4bdbbb1a16"}
Oct 14 13:09:04.961989 master-1 kubenswrapper[4740]: I1014 13:09:04.961949 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" event={"ID":"ec085d84-4833-4e0b-9e6a-35b983a7059b","Type":"ContainerStarted","Data":"5ecf35a02f431bb4456c5b0413049c600db729de59229f6510f04427ca56460a"}
Oct 14 13:09:04.982763 master-1 kubenswrapper[4740]: I1014 13:09:04.982645 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-lhshc_dc3c6b11-2798-41ca-8a29-2f4c99b0fa68/dns-node-resolver/0.log"
Oct 14 13:09:05.088072 master-1 kubenswrapper[4740]: I1014 13:09:05.087927 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" podStartSLOduration=157.087910585 podStartE2EDuration="2m37.087910585s" podCreationTimestamp="2025-10-14 13:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:05.084951103 +0000 UTC m=+170.895240442" watchObservedRunningTime="2025-10-14 13:09:05.087910585 +0000 UTC m=+170.898199924"
Oct 14 13:09:05.183459 master-1 kubenswrapper[4740]: I1014 13:09:05.183397 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-dtp9l_2a2b886b-005d-4d02-a231-ddacf42775ea/etcd-operator/0.log"
Oct 14 13:09:05.378356 master-1 kubenswrapper[4740]: I1014 13:09:05.378248 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-1_b61b7a8e-e2be-4f11-a659-1919213dda51/installer/0.log"
Oct 14 13:09:05.410142 master-1 kubenswrapper[4740]: I1014 13:09:05.410091 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:09:05.410417 master-1 kubenswrapper[4740]: E1014 13:09:05.410253 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:09:37.410219171 +0000 UTC m=+203.220508500 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:09:05.511422 master-1 kubenswrapper[4740]: I1014 13:09:05.511339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:09:05.511621 master-1 kubenswrapper[4740]: E1014 13:09:05.511509 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:09:37.511488411 +0000 UTC m=+203.321777740 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:09:05.512275 master-1 kubenswrapper[4740]: I1014 13:09:05.512251 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"]
Oct 14 13:09:05.512470 master-1 kubenswrapper[4740]: I1014 13:09:05.512405 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-1" podUID="8dddfa29-2bde-416f-870d-c24a4c6c67db" containerName="installer" containerID="cri-o://981741f7052478875c13c55a55203ce953f2bf65a91b6409d8b46febf48e712d" gracePeriod=30
Oct 14 13:09:05.580045 master-1 kubenswrapper[4740]: I1014 13:09:05.580005 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/0.log"
Oct 14 13:09:05.777782 master-1 kubenswrapper[4740]: I1014 13:09:05.777742 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/kube-rbac-proxy/0.log"
Oct 14 13:09:06.022734 master-1 kubenswrapper[4740]: I1014 13:09:06.022668 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"]
Oct 14 13:09:06.022957 master-1 kubenswrapper[4740]: E1014 13:09:06.022876 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22084985-e5ef-4430-89e7-fb673e7c928f" containerName="installer"
Oct 14 13:09:06.022957 master-1 kubenswrapper[4740]: I1014 13:09:06.022888 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="22084985-e5ef-4430-89e7-fb673e7c928f" containerName="installer"
Oct 14 13:09:06.023090 master-1 kubenswrapper[4740]: I1014 13:09:06.022986 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="22084985-e5ef-4430-89e7-fb673e7c928f" containerName="installer"
Oct 14 13:09:06.023354 master-1 kubenswrapper[4740]: I1014 13:09:06.023330 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.046973 master-1 kubenswrapper[4740]: I1014 13:09:06.032262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"]
Oct 14 13:09:06.219713 master-1 kubenswrapper[4740]: I1014 13:09:06.219662 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-var-lock\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.219713 master-1 kubenswrapper[4740]: I1014 13:09:06.219712 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kube-api-access\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.219923 master-1 kubenswrapper[4740]: I1014 13:09:06.219763 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.320962 master-1 kubenswrapper[4740]: I1014 13:09:06.320851 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kube-api-access\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.320962 master-1 kubenswrapper[4740]: I1014 13:09:06.320909 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.321152 master-1 kubenswrapper[4740]: I1014 13:09:06.321026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-var-lock\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.321152 master-1 kubenswrapper[4740]: I1014 13:09:06.321035 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.321152 master-1 kubenswrapper[4740]: I1014 13:09:06.321129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-var-lock\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.357701 master-1 kubenswrapper[4740]: I1014 13:09:06.357668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kube-api-access\") pod \"installer-3-master-1\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.358386 master-1 kubenswrapper[4740]: I1014 13:09:06.358222 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1"
Oct 14 13:09:06.380460 master-1 kubenswrapper[4740]: I1014 13:09:06.380413 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-bqdtw_15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c/kube-apiserver-operator/0.log"
Oct 14 13:09:06.579351 master-1 kubenswrapper[4740]: I1014 13:09:06.579201 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_8dddfa29-2bde-416f-870d-c24a4c6c67db/installer/0.log"
Oct 14 13:09:06.780153 master-1 kubenswrapper[4740]: I1014 13:09:06.780091 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-ppzvt_772f8774-25f4-4987-bd40-8f3adda97e8b/kube-controller-manager-operator/0.log"
Oct 14 13:09:06.979046 master-1 kubenswrapper[4740]: I1014 13:09:06.978916 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-1_b1c6b650-cfb9-4098-8d7b-43e9735daa7e/installer/0.log"
Oct 14 13:09:07.381156 master-1 kubenswrapper[4740]: I1014 13:09:07.381097 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-gtvcp_ec50d087-259f-45c0-a15a-7fe949ae66dd/kube-scheduler-operator-container/0.log"
Oct 14 13:09:08.161695 master-1 kubenswrapper[4740]: I1014 13:09:08.161655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-49h5v"]
Oct 14 13:09:08.162569 master-1 kubenswrapper[4740]: I1014 13:09:08.162543 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.165770 master-1 kubenswrapper[4740]: I1014 13:09:08.165719 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Oct 14 13:09:08.295201 master-1 kubenswrapper[4740]: I1014 13:09:08.295136 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"]
Oct 14 13:09:08.346793 master-1 kubenswrapper[4740]: I1014 13:09:08.346686 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c65eca1-8a10-4132-8b45-a9ba45044e18-rootfs\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.346793 master-1 kubenswrapper[4740]: I1014 13:09:08.346776 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c65eca1-8a10-4132-8b45-a9ba45044e18-proxy-tls\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.346964 master-1 kubenswrapper[4740]: I1014 13:09:08.346821 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhgr\" (UniqueName: \"kubernetes.io/projected/3c65eca1-8a10-4132-8b45-a9ba45044e18-kube-api-access-fqhgr\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.346964 master-1 kubenswrapper[4740]: I1014 13:09:08.346906 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c65eca1-8a10-4132-8b45-a9ba45044e18-mcd-auth-proxy-config\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.454039 master-1 kubenswrapper[4740]: I1014 13:09:08.453869 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c65eca1-8a10-4132-8b45-a9ba45044e18-mcd-auth-proxy-config\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.454039 master-1 kubenswrapper[4740]: I1014 13:09:08.453950 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c65eca1-8a10-4132-8b45-a9ba45044e18-rootfs\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.454039 master-1 kubenswrapper[4740]: I1014 13:09:08.454000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c65eca1-8a10-4132-8b45-a9ba45044e18-proxy-tls\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.454039 master-1 kubenswrapper[4740]: I1014 13:09:08.454030 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqhgr\" (UniqueName: \"kubernetes.io/projected/3c65eca1-8a10-4132-8b45-a9ba45044e18-kube-api-access-fqhgr\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.455108 master-1 kubenswrapper[4740]: I1014 13:09:08.454398 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3c65eca1-8a10-4132-8b45-a9ba45044e18-rootfs\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.455108 master-1 kubenswrapper[4740]: I1014 13:09:08.454709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c65eca1-8a10-4132-8b45-a9ba45044e18-mcd-auth-proxy-config\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.457492 master-1 kubenswrapper[4740]: I1014 13:09:08.457377 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c65eca1-8a10-4132-8b45-a9ba45044e18-proxy-tls\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.479564 master-1 kubenswrapper[4740]: I1014 13:09:08.479510 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhgr\" (UniqueName: \"kubernetes.io/projected/3c65eca1-8a10-4132-8b45-a9ba45044e18-kube-api-access-fqhgr\") pod \"machine-config-daemon-49h5v\" (UID: \"3c65eca1-8a10-4132-8b45-a9ba45044e18\") " pod="openshift-machine-config-operator/machine-config-daemon-49h5v"
Oct 14 13:09:08.507959 master-1 kubenswrapper[4740]: I1014 13:09:08.507926 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"]
Oct 14 13:09:08.508585 master-1 kubenswrapper[4740]: I1014 13:09:08.508563 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 14 13:09:08.516060 master-1 kubenswrapper[4740]: I1014 13:09:08.514729 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"]
Oct 14 13:09:08.554553 master-1 kubenswrapper[4740]: I1014 13:09:08.554509 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2409a99-4fb0-44cb-a711-42808935cb31-kube-api-access\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 14 13:09:08.554646 master-1 kubenswrapper[4740]: I1014 13:09:08.554565 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 14 13:09:08.554646 master-1 kubenswrapper[4740]: I1014 13:09:08.554584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-var-lock\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " pod="openshift-kube-controller-manager/installer-2-master-1"
Oct 14 13:09:08.655392 master-1 kubenswrapper[4740]: I1014 13:09:08.655332 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2409a99-4fb0-44cb-a711-42808935cb31-kube-api-access\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") "
pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:08.655461 master-1 kubenswrapper[4740]: I1014 13:09:08.655422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:08.655492 master-1 kubenswrapper[4740]: I1014 13:09:08.655460 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-var-lock\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:08.655636 master-1 kubenswrapper[4740]: I1014 13:09:08.655587 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:08.655679 master-1 kubenswrapper[4740]: I1014 13:09:08.655647 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-var-lock\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:08.672084 master-1 kubenswrapper[4740]: I1014 13:09:08.672019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2409a99-4fb0-44cb-a711-42808935cb31-kube-api-access\") pod \"installer-2-master-1\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " 
pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:08.777770 master-1 kubenswrapper[4740]: I1014 13:09:08.777730 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-49h5v" Oct 14 13:09:08.778472 master-1 kubenswrapper[4740]: I1014 13:09:08.778451 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/cluster-olm-operator/0.log" Oct 14 13:09:08.795593 master-1 kubenswrapper[4740]: W1014 13:09:08.795528 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c65eca1_8a10_4132_8b45_a9ba45044e18.slice/crio-54554190d7c606cc445d6afc78963a6d57a2cfddbef40a4746887bbfae8d75ce WatchSource:0}: Error finding container 54554190d7c606cc445d6afc78963a6d57a2cfddbef40a4746887bbfae8d75ce: Status 404 returned error can't find the container with id 54554190d7c606cc445d6afc78963a6d57a2cfddbef40a4746887bbfae8d75ce Oct 14 13:09:08.851670 master-1 kubenswrapper[4740]: I1014 13:09:08.851331 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:08.977281 master-1 kubenswrapper[4740]: I1014 13:09:08.977201 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/copy-catalogd-manifests/0.log" Oct 14 13:09:08.997677 master-1 kubenswrapper[4740]: I1014 13:09:08.987577 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"0dfce785-d7d6-4abe-816a-ffe7a9ad980f","Type":"ContainerStarted","Data":"cde66cb045d8f62ec0c83968784efb957213310f7caee32b4091d5a6e87d6932"} Oct 14 13:09:08.997677 master-1 kubenswrapper[4740]: I1014 13:09:08.987630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"0dfce785-d7d6-4abe-816a-ffe7a9ad980f","Type":"ContainerStarted","Data":"3d649bea2321b015d028ef86754ea907b27028924d1e9793f0522fc54b6a6f25"} Oct 14 13:09:08.997677 master-1 kubenswrapper[4740]: I1014 13:09:08.993066 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" event={"ID":"ec085d84-4833-4e0b-9e6a-35b983a7059b","Type":"ContainerStarted","Data":"b571958693e1e882b82f62f00a695871bd2fb33a9bce37964d1fc0625a97ed39"} Oct 14 13:09:08.997677 master-1 kubenswrapper[4740]: I1014 13:09:08.993109 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" event={"ID":"ec085d84-4833-4e0b-9e6a-35b983a7059b","Type":"ContainerStarted","Data":"67c17553d117fd8f968f52bb343a859674579a0e8b60300d9bbc090906179fe3"} Oct 14 13:09:08.997677 master-1 kubenswrapper[4740]: I1014 13:09:08.994811 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" 
event={"ID":"62ef5e24-de36-454a-a34c-e741a86a6f96","Type":"ContainerStarted","Data":"64e9ccd5a47921e0aa10ffbae49a79732284d2eb6bc3bc4c56cd762bbc2a693a"} Oct 14 13:09:08.997677 master-1 kubenswrapper[4740]: I1014 13:09:08.997090 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49h5v" event={"ID":"3c65eca1-8a10-4132-8b45-a9ba45044e18","Type":"ContainerStarted","Data":"25cd9fd044c92b41112956ff26ab8e017c20a5af8ea2ebddc18c71841954a412"} Oct 14 13:09:08.997677 master-1 kubenswrapper[4740]: I1014 13:09:08.997124 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49h5v" event={"ID":"3c65eca1-8a10-4132-8b45-a9ba45044e18","Type":"ContainerStarted","Data":"54554190d7c606cc445d6afc78963a6d57a2cfddbef40a4746887bbfae8d75ce"} Oct 14 13:09:09.000183 master-1 kubenswrapper[4740]: I1014 13:09:09.000161 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" event={"ID":"01742ba1-f43b-4ff2-97d5-1a535e925a0f","Type":"ContainerStarted","Data":"dae508e34b6e62af530a4db5d6c36d51de02b0edd600811840e76a6649c9dd75"} Oct 14 13:09:09.000279 master-1 kubenswrapper[4740]: I1014 13:09:09.000186 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" event={"ID":"01742ba1-f43b-4ff2-97d5-1a535e925a0f","Type":"ContainerStarted","Data":"5da5b33e2e38633a585455a99c0213bbadc15f83146f950b9753cdf3a2191d0a"} Oct 14 13:09:09.004929 master-1 kubenswrapper[4740]: I1014 13:09:09.004891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" event={"ID":"2a106ff8-388a-4d30-8370-aad661eb4365","Type":"ContainerStarted","Data":"103a0a432a550549596fe64f0652cd85127a6a4c94458fd9714e55d1dbc13041"} Oct 14 13:09:09.005955 master-1 kubenswrapper[4740]: I1014 13:09:09.005932 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:09:09.007749 master-1 kubenswrapper[4740]: I1014 13:09:09.007675 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-1" podStartSLOduration=3.007657883 podStartE2EDuration="3.007657883s" podCreationTimestamp="2025-10-14 13:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:09.005624246 +0000 UTC m=+174.815913575" watchObservedRunningTime="2025-10-14 13:09:09.007657883 +0000 UTC m=+174.817947212" Oct 14 13:09:09.013701 master-1 kubenswrapper[4740]: I1014 13:09:09.013662 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" Oct 14 13:09:09.030832 master-1 kubenswrapper[4740]: I1014 13:09:09.030602 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh" podStartSLOduration=161.224026527 podStartE2EDuration="2m45.030584816s" podCreationTimestamp="2025-10-14 13:06:24 +0000 UTC" firstStartedPulling="2025-10-14 13:09:04.076069255 +0000 UTC m=+169.886358594" lastFinishedPulling="2025-10-14 13:09:07.882627554 +0000 UTC m=+173.692916883" observedRunningTime="2025-10-14 13:09:09.028513199 +0000 UTC m=+174.838802538" watchObservedRunningTime="2025-10-14 13:09:09.030584816 +0000 UTC m=+174.840874155" Oct 14 13:09:09.044474 master-1 kubenswrapper[4740]: I1014 13:09:09.044406 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" podStartSLOduration=131.702746606 podStartE2EDuration="2m15.044391848s" podCreationTimestamp="2025-10-14 13:06:54 +0000 UTC" firstStartedPulling="2025-10-14 13:09:04.542127273 +0000 UTC m=+170.352416602" 
lastFinishedPulling="2025-10-14 13:09:07.883772515 +0000 UTC m=+173.694061844" observedRunningTime="2025-10-14 13:09:09.043923075 +0000 UTC m=+174.854212424" watchObservedRunningTime="2025-10-14 13:09:09.044391848 +0000 UTC m=+174.854681177" Oct 14 13:09:09.060977 master-1 kubenswrapper[4740]: I1014 13:09:09.060900 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" podStartSLOduration=114.719637973 podStartE2EDuration="1m58.060877994s" podCreationTimestamp="2025-10-14 13:07:11 +0000 UTC" firstStartedPulling="2025-10-14 13:09:04.541388463 +0000 UTC m=+170.351677812" lastFinishedPulling="2025-10-14 13:09:07.882628504 +0000 UTC m=+173.692917833" observedRunningTime="2025-10-14 13:09:09.058668392 +0000 UTC m=+174.868957721" watchObservedRunningTime="2025-10-14 13:09:09.060877994 +0000 UTC m=+174.871167323" Oct 14 13:09:09.080746 master-1 kubenswrapper[4740]: I1014 13:09:09.079038 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" podStartSLOduration=114.255690252 podStartE2EDuration="1m58.079012205s" podCreationTimestamp="2025-10-14 13:07:11 +0000 UTC" firstStartedPulling="2025-10-14 13:09:04.072396103 +0000 UTC m=+169.882685422" lastFinishedPulling="2025-10-14 13:09:07.895718026 +0000 UTC m=+173.706007375" observedRunningTime="2025-10-14 13:09:09.073256506 +0000 UTC m=+174.883545835" watchObservedRunningTime="2025-10-14 13:09:09.079012205 +0000 UTC m=+174.889301534" Oct 14 13:09:09.176457 master-1 kubenswrapper[4740]: I1014 13:09:09.176419 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/copy-operator-controller-manifests/0.log" Oct 14 13:09:09.250990 master-1 kubenswrapper[4740]: I1014 13:09:09.250885 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/installer-2-master-1"] Oct 14 13:09:09.268257 master-1 kubenswrapper[4740]: W1014 13:09:09.268190 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda2409a99_4fb0_44cb_a711_42808935cb31.slice/crio-2d00dc20741eb4cc65635156e3ecea46085aad5078baa5a70140c0afb7a9d40a WatchSource:0}: Error finding container 2d00dc20741eb4cc65635156e3ecea46085aad5078baa5a70140c0afb7a9d40a: Status 404 returned error can't find the container with id 2d00dc20741eb4cc65635156e3ecea46085aad5078baa5a70140c0afb7a9d40a Oct 14 13:09:09.380547 master-1 kubenswrapper[4740]: I1014 13:09:09.380497 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/cluster-olm-operator/1.log" Oct 14 13:09:09.580590 master-1 kubenswrapper[4740]: I1014 13:09:09.580528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-dbtvc_f4f3c22a-c0cd-4727-bfb4-9f92302eb13f/openshift-apiserver-operator/0.log" Oct 14 13:09:10.013909 master-1 kubenswrapper[4740]: I1014 13:09:10.013802 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49h5v" event={"ID":"3c65eca1-8a10-4132-8b45-a9ba45044e18","Type":"ContainerStarted","Data":"9ab0b904bae8dbad78487ef95f27035afb16056907266c9c19f0a50ead9292d1"} Oct 14 13:09:10.015471 master-1 kubenswrapper[4740]: I1014 13:09:10.015423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"a2409a99-4fb0-44cb-a711-42808935cb31","Type":"ContainerStarted","Data":"57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe"} Oct 14 13:09:10.015602 master-1 kubenswrapper[4740]: I1014 13:09:10.015471 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"a2409a99-4fb0-44cb-a711-42808935cb31","Type":"ContainerStarted","Data":"2d00dc20741eb4cc65635156e3ecea46085aad5078baa5a70140c0afb7a9d40a"} Oct 14 13:09:10.034489 master-1 kubenswrapper[4740]: I1014 13:09:10.034349 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-49h5v" podStartSLOduration=2.034314753 podStartE2EDuration="2.034314753s" podCreationTimestamp="2025-10-14 13:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:10.032727869 +0000 UTC m=+175.843017228" watchObservedRunningTime="2025-10-14 13:09:10.034314753 +0000 UTC m=+175.844604092" Oct 14 13:09:10.054749 master-1 kubenswrapper[4740]: I1014 13:09:10.054604 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-1" podStartSLOduration=2.054564502 podStartE2EDuration="2.054564502s" podCreationTimestamp="2025-10-14 13:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:10.046635764 +0000 UTC m=+175.856925093" watchObservedRunningTime="2025-10-14 13:09:10.054564502 +0000 UTC m=+175.864853911" Oct 14 13:09:10.377844 master-1 kubenswrapper[4740]: I1014 13:09:10.377707 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6576f6bc9d-xfzjr_ed68870d-0f75-4bac-8f5e-36016becfd08/fix-audit-permissions/0.log" Oct 14 13:09:10.586139 master-1 kubenswrapper[4740]: I1014 13:09:10.586056 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6576f6bc9d-xfzjr_ed68870d-0f75-4bac-8f5e-36016becfd08/openshift-apiserver/0.log" Oct 14 13:09:10.779710 master-1 kubenswrapper[4740]: I1014 13:09:10.779654 4740 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6576f6bc9d-xfzjr_ed68870d-0f75-4bac-8f5e-36016becfd08/openshift-apiserver-check-endpoints/0.log" Oct 14 13:09:10.986530 master-1 kubenswrapper[4740]: I1014 13:09:10.986452 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-dtp9l_2a2b886b-005d-4d02-a231-ddacf42775ea/etcd-operator/0.log" Oct 14 13:09:11.188282 master-1 kubenswrapper[4740]: I1014 13:09:11.188009 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-5l45t_3a952fbc-3908-4e41-a914-9f63f47252e4/openshift-controller-manager-operator/0.log" Oct 14 13:09:13.047539 master-1 kubenswrapper[4740]: I1014 13:09:13.047432 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" event={"ID":"7be129fe-d04d-4384-a0e9-76b3148a1f3e","Type":"ContainerStarted","Data":"96b2b41f849138a51ed6c80a557c300f25ccfd33fa9d293fc26893f6dca2a127"} Oct 14 13:09:13.048441 master-1 kubenswrapper[4740]: I1014 13:09:13.047680 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:09:13.049420 master-1 kubenswrapper[4740]: I1014 13:09:13.049369 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" event={"ID":"3d292fbb-b49c-4543-993b-738103c7419b","Type":"ContainerStarted","Data":"4320f405b6eddee3c99cc1f1c0018c3d859a45774c455c690dc0bc50eafc5755"} Oct 14 13:09:13.049659 master-1 kubenswrapper[4740]: I1014 13:09:13.049618 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:09:13.051520 master-1 kubenswrapper[4740]: I1014 13:09:13.051478 
4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" event={"ID":"57526e49-7f51-4a66-8f48-0c485fc1e88f","Type":"ContainerStarted","Data":"094996cdbf42b57ea7b10bc14df7f317b9a68ad8d9b35aa5c9ba2bed53d9d647"} Oct 14 13:09:13.052362 master-1 kubenswrapper[4740]: I1014 13:09:13.052309 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:09:13.057312 master-1 kubenswrapper[4740]: I1014 13:09:13.057260 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" Oct 14 13:09:13.061715 master-1 kubenswrapper[4740]: I1014 13:09:13.061661 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" Oct 14 13:09:13.077210 master-1 kubenswrapper[4740]: I1014 13:09:13.077093 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" podStartSLOduration=159.5126068 podStartE2EDuration="2m47.077064605s" podCreationTimestamp="2025-10-14 13:06:26 +0000 UTC" firstStartedPulling="2025-10-14 13:09:04.697674161 +0000 UTC m=+170.507963500" lastFinishedPulling="2025-10-14 13:09:12.262131976 +0000 UTC m=+178.072421305" observedRunningTime="2025-10-14 13:09:13.07108172 +0000 UTC m=+178.881371089" watchObservedRunningTime="2025-10-14 13:09:13.077064605 +0000 UTC m=+178.887353974" Oct 14 13:09:13.087904 master-1 kubenswrapper[4740]: I1014 13:09:13.087782 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2" podStartSLOduration=157.404766471 podStartE2EDuration="2m45.087758011s" podCreationTimestamp="2025-10-14 13:06:28 +0000 UTC" firstStartedPulling="2025-10-14 
13:09:04.534914074 +0000 UTC m=+170.345203403" lastFinishedPulling="2025-10-14 13:09:12.217905614 +0000 UTC m=+178.028194943" observedRunningTime="2025-10-14 13:09:13.087375171 +0000 UTC m=+178.897664550" watchObservedRunningTime="2025-10-14 13:09:13.087758011 +0000 UTC m=+178.898047380" Oct 14 13:09:13.432424 master-1 kubenswrapper[4740]: I1014 13:09:13.431748 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c" podStartSLOduration=158.60555255 podStartE2EDuration="2m46.431724086s" podCreationTimestamp="2025-10-14 13:06:27 +0000 UTC" firstStartedPulling="2025-10-14 13:09:04.532524718 +0000 UTC m=+170.342814057" lastFinishedPulling="2025-10-14 13:09:12.358696224 +0000 UTC m=+178.168985593" observedRunningTime="2025-10-14 13:09:13.109283306 +0000 UTC m=+178.919572665" watchObservedRunningTime="2025-10-14 13:09:13.431724086 +0000 UTC m=+179.242013425" Oct 14 13:09:13.436990 master-1 kubenswrapper[4740]: I1014 13:09:13.434049 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5ddb89f76-xf924"] Oct 14 13:09:13.436990 master-1 kubenswrapper[4740]: I1014 13:09:13.434844 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.446053 master-1 kubenswrapper[4740]: I1014 13:09:13.442411 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 14 13:09:13.446053 master-1 kubenswrapper[4740]: I1014 13:09:13.442665 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 14 13:09:13.446053 master-1 kubenswrapper[4740]: I1014 13:09:13.442817 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 14 13:09:13.446053 master-1 kubenswrapper[4740]: I1014 13:09:13.443033 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 14 13:09:13.446053 master-1 kubenswrapper[4740]: I1014 13:09:13.443347 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 14 13:09:13.446053 master-1 kubenswrapper[4740]: I1014 13:09:13.443498 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 14 13:09:13.446053 master-1 kubenswrapper[4740]: I1014 13:09:13.445608 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4"] Oct 14 13:09:13.446603 master-1 kubenswrapper[4740]: I1014 13:09:13.446354 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" Oct 14 13:09:13.450153 master-1 kubenswrapper[4740]: I1014 13:09:13.450095 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Oct 14 13:09:13.455668 master-1 kubenswrapper[4740]: I1014 13:09:13.455602 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4"] Oct 14 13:09:13.536069 master-1 kubenswrapper[4740]: I1014 13:09:13.536000 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-metrics-certs\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.536069 master-1 kubenswrapper[4740]: I1014 13:09:13.536082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-service-ca-bundle\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.536069 master-1 kubenswrapper[4740]: I1014 13:09:13.536120 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-default-certificate\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.536602 master-1 kubenswrapper[4740]: I1014 13:09:13.536315 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-bg9c4\" (UID: \"405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" Oct 14 13:09:13.536602 master-1 kubenswrapper[4740]: I1014 13:09:13.536357 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82pw2\" (UniqueName: \"kubernetes.io/projected/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-kube-api-access-82pw2\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.536602 master-1 kubenswrapper[4740]: I1014 13:09:13.536410 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-stats-auth\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.640196 master-1 kubenswrapper[4740]: I1014 13:09:13.640073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-bg9c4\" (UID: \"405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" Oct 14 13:09:13.640196 master-1 kubenswrapper[4740]: I1014 13:09:13.640143 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82pw2\" (UniqueName: \"kubernetes.io/projected/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-kube-api-access-82pw2\") pod \"router-default-5ddb89f76-xf924\" (UID: 
\"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.640196 master-1 kubenswrapper[4740]: I1014 13:09:13.640176 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-stats-auth\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.640806 master-1 kubenswrapper[4740]: I1014 13:09:13.640256 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-metrics-certs\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.640806 master-1 kubenswrapper[4740]: I1014 13:09:13.640295 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-default-certificate\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.640806 master-1 kubenswrapper[4740]: I1014 13:09:13.640317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-service-ca-bundle\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.641432 master-1 kubenswrapper[4740]: I1014 13:09:13.641381 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-service-ca-bundle\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.645411 master-1 kubenswrapper[4740]: I1014 13:09:13.645363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-79d5f95f5c-bg9c4\" (UID: \"405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" Oct 14 13:09:13.646208 master-1 kubenswrapper[4740]: I1014 13:09:13.646132 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-metrics-certs\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.646534 master-1 kubenswrapper[4740]: I1014 13:09:13.646484 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-default-certificate\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.649289 master-1 kubenswrapper[4740]: I1014 13:09:13.649167 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-stats-auth\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.663967 master-1 kubenswrapper[4740]: I1014 13:09:13.663176 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-82pw2\" (UniqueName: \"kubernetes.io/projected/b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28-kube-api-access-82pw2\") pod \"router-default-5ddb89f76-xf924\" (UID: \"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28\") " pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.768545 master-1 kubenswrapper[4740]: I1014 13:09:13.768432 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:13.792220 master-1 kubenswrapper[4740]: W1014 13:09:13.792166 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1498c7d_1e0e_4a99_a0a0_bf6e05c7fd28.slice/crio-c14eb97e8959315f24a564b7cb729c39840278d5e462526b25412368ac548572 WatchSource:0}: Error finding container c14eb97e8959315f24a564b7cb729c39840278d5e462526b25412368ac548572: Status 404 returned error can't find the container with id c14eb97e8959315f24a564b7cb729c39840278d5e462526b25412368ac548572 Oct 14 13:09:13.794673 master-1 kubenswrapper[4740]: I1014 13:09:13.794618 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" Oct 14 13:09:14.065735 master-1 kubenswrapper[4740]: I1014 13:09:14.065523 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerStarted","Data":"c14eb97e8959315f24a564b7cb729c39840278d5e462526b25412368ac548572"} Oct 14 13:09:14.263771 master-1 kubenswrapper[4740]: I1014 13:09:14.229299 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 14 13:09:14.263771 master-1 kubenswrapper[4740]: I1014 13:09:14.229607 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-1" podUID="0dfce785-d7d6-4abe-816a-ffe7a9ad980f" containerName="installer" containerID="cri-o://cde66cb045d8f62ec0c83968784efb957213310f7caee32b4091d5a6e87d6932" gracePeriod=30 Oct 14 13:09:14.318681 master-1 kubenswrapper[4740]: I1014 13:09:14.318624 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4"] Oct 14 13:09:14.323520 master-1 kubenswrapper[4740]: W1014 13:09:14.323443 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405aee2c_2eac_40f5_aa9e_e9ca6cf5ccd5.slice/crio-d9afbf8e1615bcc0f3a409c231447c4c8042383f0edf12ccd260bfdd2f290a8a WatchSource:0}: Error finding container d9afbf8e1615bcc0f3a409c231447c4c8042383f0edf12ccd260bfdd2f290a8a: Status 404 returned error can't find the container with id d9afbf8e1615bcc0f3a409c231447c4c8042383f0edf12ccd260bfdd2f290a8a Oct 14 13:09:14.474867 master-1 kubenswrapper[4740]: I1014 13:09:14.474781 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp"] Oct 14 13:09:14.475620 
master-1 kubenswrapper[4740]: I1014 13:09:14.475583 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.479306 master-1 kubenswrapper[4740]: I1014 13:09:14.479261 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 14 13:09:14.483180 master-1 kubenswrapper[4740]: I1014 13:09:14.483124 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp"] Oct 14 13:09:14.669908 master-1 kubenswrapper[4740]: I1014 13:09:14.669851 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38e3dcc6-46a2-4bdd-883d-d113945b0703-webhook-cert\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.669908 master-1 kubenswrapper[4740]: I1014 13:09:14.669915 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38e3dcc6-46a2-4bdd-883d-d113945b0703-apiservice-cert\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.670153 master-1 kubenswrapper[4740]: I1014 13:09:14.669946 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26f9n\" (UniqueName: \"kubernetes.io/projected/38e3dcc6-46a2-4bdd-883d-d113945b0703-kube-api-access-26f9n\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.670153 
master-1 kubenswrapper[4740]: I1014 13:09:14.670027 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/38e3dcc6-46a2-4bdd-883d-d113945b0703-tmpfs\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.772427 master-1 kubenswrapper[4740]: I1014 13:09:14.772333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38e3dcc6-46a2-4bdd-883d-d113945b0703-webhook-cert\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.772427 master-1 kubenswrapper[4740]: I1014 13:09:14.772414 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38e3dcc6-46a2-4bdd-883d-d113945b0703-apiservice-cert\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.772769 master-1 kubenswrapper[4740]: I1014 13:09:14.772457 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26f9n\" (UniqueName: \"kubernetes.io/projected/38e3dcc6-46a2-4bdd-883d-d113945b0703-kube-api-access-26f9n\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.772769 master-1 kubenswrapper[4740]: I1014 13:09:14.772563 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/38e3dcc6-46a2-4bdd-883d-d113945b0703-tmpfs\") pod 
\"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.773580 master-1 kubenswrapper[4740]: I1014 13:09:14.773517 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/38e3dcc6-46a2-4bdd-883d-d113945b0703-tmpfs\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.777025 master-1 kubenswrapper[4740]: I1014 13:09:14.776969 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38e3dcc6-46a2-4bdd-883d-d113945b0703-apiservice-cert\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.777618 master-1 kubenswrapper[4740]: I1014 13:09:14.777553 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38e3dcc6-46a2-4bdd-883d-d113945b0703-webhook-cert\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:14.803366 master-1 kubenswrapper[4740]: I1014 13:09:14.803286 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26f9n\" (UniqueName: \"kubernetes.io/projected/38e3dcc6-46a2-4bdd-883d-d113945b0703-kube-api-access-26f9n\") pod \"packageserver-6f5778dccb-kwxxp\" (UID: \"38e3dcc6-46a2-4bdd-883d-d113945b0703\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:15.074095 master-1 kubenswrapper[4740]: I1014 13:09:15.074021 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" event={"ID":"405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5","Type":"ContainerStarted","Data":"d9afbf8e1615bcc0f3a409c231447c4c8042383f0edf12ccd260bfdd2f290a8a"} Oct 14 13:09:15.076758 master-1 kubenswrapper[4740]: I1014 13:09:15.076708 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-1_0dfce785-d7d6-4abe-816a-ffe7a9ad980f/installer/0.log" Oct 14 13:09:15.076882 master-1 kubenswrapper[4740]: I1014 13:09:15.076771 4740 generic.go:334] "Generic (PLEG): container finished" podID="0dfce785-d7d6-4abe-816a-ffe7a9ad980f" containerID="cde66cb045d8f62ec0c83968784efb957213310f7caee32b4091d5a6e87d6932" exitCode=1 Oct 14 13:09:15.077020 master-1 kubenswrapper[4740]: I1014 13:09:15.076950 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"0dfce785-d7d6-4abe-816a-ffe7a9ad980f","Type":"ContainerDied","Data":"cde66cb045d8f62ec0c83968784efb957213310f7caee32b4091d5a6e87d6932"} Oct 14 13:09:15.092979 master-1 kubenswrapper[4740]: I1014 13:09:15.092894 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:15.548518 master-1 kubenswrapper[4740]: I1014 13:09:15.548472 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-1_0dfce785-d7d6-4abe-816a-ffe7a9ad980f/installer/0.log" Oct 14 13:09:15.548676 master-1 kubenswrapper[4740]: I1014 13:09:15.548563 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1" Oct 14 13:09:15.604421 master-1 kubenswrapper[4740]: I1014 13:09:15.604347 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp"] Oct 14 13:09:15.695982 master-1 kubenswrapper[4740]: I1014 13:09:15.695890 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-var-lock\") pod \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " Oct 14 13:09:15.695982 master-1 kubenswrapper[4740]: I1014 13:09:15.695967 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kubelet-dir\") pod \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " Oct 14 13:09:15.696307 master-1 kubenswrapper[4740]: I1014 13:09:15.696045 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kube-api-access\") pod \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\" (UID: \"0dfce785-d7d6-4abe-816a-ffe7a9ad980f\") " Oct 14 13:09:15.696564 master-1 kubenswrapper[4740]: I1014 13:09:15.696509 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-var-lock" (OuterVolumeSpecName: "var-lock") pod "0dfce785-d7d6-4abe-816a-ffe7a9ad980f" (UID: "0dfce785-d7d6-4abe-816a-ffe7a9ad980f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:15.696797 master-1 kubenswrapper[4740]: I1014 13:09:15.696720 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0dfce785-d7d6-4abe-816a-ffe7a9ad980f" (UID: "0dfce785-d7d6-4abe-816a-ffe7a9ad980f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:15.701024 master-1 kubenswrapper[4740]: I1014 13:09:15.700956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0dfce785-d7d6-4abe-816a-ffe7a9ad980f" (UID: "0dfce785-d7d6-4abe-816a-ffe7a9ad980f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:09:15.798350 master-1 kubenswrapper[4740]: I1014 13:09:15.798285 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:15.798350 master-1 kubenswrapper[4740]: I1014 13:09:15.798333 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:15.798350 master-1 kubenswrapper[4740]: I1014 13:09:15.798350 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0dfce785-d7d6-4abe-816a-ffe7a9ad980f-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:16.084049 master-1 kubenswrapper[4740]: I1014 13:09:16.083982 4740 generic.go:334] "Generic (PLEG): container finished" podID="772f8774-25f4-4987-bd40-8f3adda97e8b" 
containerID="f5832d56e3fcd22df22a6eedf838f45d8d3192cad36fc782deb89ade5a630fbb" exitCode=0 Oct 14 13:09:16.084850 master-1 kubenswrapper[4740]: I1014 13:09:16.084074 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" event={"ID":"772f8774-25f4-4987-bd40-8f3adda97e8b","Type":"ContainerDied","Data":"f5832d56e3fcd22df22a6eedf838f45d8d3192cad36fc782deb89ade5a630fbb"} Oct 14 13:09:16.084850 master-1 kubenswrapper[4740]: I1014 13:09:16.084580 4740 scope.go:117] "RemoveContainer" containerID="f5832d56e3fcd22df22a6eedf838f45d8d3192cad36fc782deb89ade5a630fbb" Oct 14 13:09:16.091647 master-1 kubenswrapper[4740]: I1014 13:09:16.086378 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-1_0dfce785-d7d6-4abe-816a-ffe7a9ad980f/installer/0.log" Oct 14 13:09:16.091647 master-1 kubenswrapper[4740]: I1014 13:09:16.086544 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-1" Oct 14 13:09:16.091647 master-1 kubenswrapper[4740]: I1014 13:09:16.089110 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-1" event={"ID":"0dfce785-d7d6-4abe-816a-ffe7a9ad980f","Type":"ContainerDied","Data":"3d649bea2321b015d028ef86754ea907b27028924d1e9793f0522fc54b6a6f25"} Oct 14 13:09:16.091647 master-1 kubenswrapper[4740]: I1014 13:09:16.089153 4740 scope.go:117] "RemoveContainer" containerID="cde66cb045d8f62ec0c83968784efb957213310f7caee32b4091d5a6e87d6932" Oct 14 13:09:16.091647 master-1 kubenswrapper[4740]: I1014 13:09:16.090248 4740 generic.go:334] "Generic (PLEG): container finished" podID="2fa5c762-a739-4cf4-929c-573bc5494b81" containerID="008d1108c66a56e8ed16a8017d28e4157ac29ff463d22610838bd2fe665ea8cb" exitCode=0 Oct 14 13:09:16.091647 master-1 kubenswrapper[4740]: I1014 13:09:16.090293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" event={"ID":"2fa5c762-a739-4cf4-929c-573bc5494b81","Type":"ContainerDied","Data":"008d1108c66a56e8ed16a8017d28e4157ac29ff463d22610838bd2fe665ea8cb"} Oct 14 13:09:16.091647 master-1 kubenswrapper[4740]: I1014 13:09:16.090754 4740 scope.go:117] "RemoveContainer" containerID="008d1108c66a56e8ed16a8017d28e4157ac29ff463d22610838bd2fe665ea8cb" Oct 14 13:09:16.098024 master-1 kubenswrapper[4740]: I1014 13:09:16.097954 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" event={"ID":"38e3dcc6-46a2-4bdd-883d-d113945b0703","Type":"ContainerStarted","Data":"9723dde06ccb3f623152f870e661fc70c46d65b013696b26bb14a4f8240465c0"} Oct 14 13:09:16.141003 master-1 kubenswrapper[4740]: I1014 13:09:16.140928 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 14 13:09:16.145911 master-1 
kubenswrapper[4740]: I1014 13:09:16.145853 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-1"] Oct 14 13:09:16.429738 master-1 kubenswrapper[4740]: I1014 13:09:16.429649 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-1"] Oct 14 13:09:16.430052 master-1 kubenswrapper[4740]: E1014 13:09:16.430011 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfce785-d7d6-4abe-816a-ffe7a9ad980f" containerName="installer" Oct 14 13:09:16.430052 master-1 kubenswrapper[4740]: I1014 13:09:16.430035 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfce785-d7d6-4abe-816a-ffe7a9ad980f" containerName="installer" Oct 14 13:09:16.430281 master-1 kubenswrapper[4740]: I1014 13:09:16.430195 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfce785-d7d6-4abe-816a-ffe7a9ad980f" containerName="installer" Oct 14 13:09:16.430781 master-1 kubenswrapper[4740]: I1014 13:09:16.430725 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.435932 master-1 kubenswrapper[4740]: I1014 13:09:16.435860 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-1"] Oct 14 13:09:16.615416 master-1 kubenswrapper[4740]: I1014 13:09:16.610072 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-var-lock\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.615416 master-1 kubenswrapper[4740]: I1014 13:09:16.610195 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.615416 master-1 kubenswrapper[4740]: I1014 13:09:16.610287 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.711303 master-1 kubenswrapper[4740]: I1014 13:09:16.711212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-var-lock\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.711303 master-1 kubenswrapper[4740]: I1014 13:09:16.711299 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.711590 master-1 kubenswrapper[4740]: I1014 13:09:16.711359 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.711590 master-1 kubenswrapper[4740]: I1014 13:09:16.711416 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-var-lock\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.711590 master-1 kubenswrapper[4740]: I1014 13:09:16.711500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.734351 master-1 kubenswrapper[4740]: I1014 13:09:16.733203 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kube-api-access\") pod \"installer-4-master-1\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.757728 master-1 kubenswrapper[4740]: I1014 13:09:16.757624 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:09:16.832855 master-1 kubenswrapper[4740]: I1014 13:09:16.830877 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-b6pv4"] Oct 14 13:09:16.833324 master-1 kubenswrapper[4740]: I1014 13:09:16.833288 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:16.875873 master-1 kubenswrapper[4740]: I1014 13:09:16.875817 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 14 13:09:16.876207 master-1 kubenswrapper[4740]: I1014 13:09:16.876155 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 14 13:09:16.913928 master-1 kubenswrapper[4740]: I1014 13:09:16.913803 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-node-bootstrap-token\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:16.913928 master-1 kubenswrapper[4740]: I1014 13:09:16.913869 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2qdt\" (UniqueName: \"kubernetes.io/projected/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-kube-api-access-z2qdt\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:16.914161 master-1 kubenswrapper[4740]: I1014 13:09:16.913950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-certs\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:16.914161 master-1 kubenswrapper[4740]: I1014 13:09:16.913990 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:09:16.914288 master-1 kubenswrapper[4740]: E1014 13:09:16.914210 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:09:16.914357 master-1 kubenswrapper[4740]: E1014 13:09:16.914326 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:10:20.914304483 +0000 UTC m=+246.724593822 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found Oct 14 13:09:16.952116 master-1 kubenswrapper[4740]: I1014 13:09:16.952003 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfce785-d7d6-4abe-816a-ffe7a9ad980f" path="/var/lib/kubelet/pods/0dfce785-d7d6-4abe-816a-ffe7a9ad980f/volumes" Oct 14 13:09:17.015460 master-1 kubenswrapper[4740]: I1014 13:09:17.014993 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-node-bootstrap-token\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:17.015460 master-1 kubenswrapper[4740]: I1014 13:09:17.015049 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2qdt\" (UniqueName: \"kubernetes.io/projected/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-kube-api-access-z2qdt\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:17.015460 master-1 kubenswrapper[4740]: I1014 13:09:17.015095 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-certs\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:17.028372 master-1 kubenswrapper[4740]: I1014 13:09:17.021388 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-certs\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:17.028372 master-1 kubenswrapper[4740]: I1014 13:09:17.025111 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-node-bootstrap-token\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:17.050913 master-1 kubenswrapper[4740]: I1014 13:09:17.050855 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2qdt\" (UniqueName: \"kubernetes.io/projected/f4b808ea-786b-4ff6-a7e8-73b0c9ac8157-kube-api-access-z2qdt\") pod \"machine-config-server-b6pv4\" (UID: \"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157\") " pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:17.106089 master-1 kubenswrapper[4740]: I1014 13:09:17.106000 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q" event={"ID":"2fa5c762-a739-4cf4-929c-573bc5494b81","Type":"ContainerStarted","Data":"3a52ad87c12b2cebadca3c6d8e41252ea3bf98858859e5b94fbed13528a7b268"} Oct 14 13:09:17.108390 master-1 kubenswrapper[4740]: I1014 13:09:17.108343 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" event={"ID":"38e3dcc6-46a2-4bdd-883d-d113945b0703","Type":"ContainerStarted","Data":"8d5ed466316ee154dac4dfb16e9755ee6c83d1f2a8340492537f91ac52b28b12"} Oct 14 13:09:17.108696 master-1 kubenswrapper[4740]: I1014 13:09:17.108658 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:17.116054 master-1 kubenswrapper[4740]: I1014 13:09:17.115995 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" Oct 14 13:09:17.118359 master-1 kubenswrapper[4740]: I1014 13:09:17.118299 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt" event={"ID":"772f8774-25f4-4987-bd40-8f3adda97e8b","Type":"ContainerStarted","Data":"5d3405e674b39fe5c383c84ed472560a466322648c30d0f4130ebd9ef2f06d70"} Oct 14 13:09:17.125697 master-1 kubenswrapper[4740]: I1014 13:09:17.125613 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp" podStartSLOduration=3.125592292 podStartE2EDuration="3.125592292s" podCreationTimestamp="2025-10-14 13:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:17.122902078 +0000 UTC m=+182.933191447" watchObservedRunningTime="2025-10-14 13:09:17.125592292 +0000 UTC m=+182.935881661" Oct 14 13:09:17.155142 master-1 kubenswrapper[4740]: I1014 13:09:17.154816 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-b6pv4" Oct 14 13:09:17.236991 master-1 kubenswrapper[4740]: I1014 13:09:17.228013 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-1"] Oct 14 13:09:17.736348 master-1 kubenswrapper[4740]: I1014 13:09:17.730316 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:09:17.736348 master-1 kubenswrapper[4740]: I1014 13:09:17.733311 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.825433 master-1 kubenswrapper[4740]: I1014 13:09:17.825349 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:09:17.826970 master-1 kubenswrapper[4740]: I1014 13:09:17.826904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.827042 master-1 kubenswrapper[4740]: I1014 13:09:17.826969 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.827042 master-1 kubenswrapper[4740]: I1014 13:09:17.827007 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.827042 master-1 kubenswrapper[4740]: I1014 13:09:17.827034 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.827189 master-1 kubenswrapper[4740]: I1014 13:09:17.827064 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.827189 master-1 kubenswrapper[4740]: I1014 13:09:17.827184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:09:17.827338 master-1 kubenswrapper[4740]: I1014 13:09:17.827216 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.832243 master-1 kubenswrapper[4740]: I1014 13:09:17.832179 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1-metrics-certs\") pod \"network-metrics-daemon-8l654\" (UID: \"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1\") " pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:09:17.928815 master-1 kubenswrapper[4740]: I1014 13:09:17.928730 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.928815 master-1 kubenswrapper[4740]: I1014 13:09:17.928810 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"etcd-master-1\" (UID: 
\"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.928981 master-1 kubenswrapper[4740]: I1014 13:09:17.928842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.928981 master-1 kubenswrapper[4740]: I1014 13:09:17.928861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.928981 master-1 kubenswrapper[4740]: I1014 13:09:17.928894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.928981 master-1 kubenswrapper[4740]: I1014 13:09:17.928953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.928981 master-1 kubenswrapper[4740]: I1014 13:09:17.928934 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.929179 master-1 kubenswrapper[4740]: I1014 13:09:17.929005 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.929179 master-1 kubenswrapper[4740]: I1014 13:09:17.929057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.929179 master-1 kubenswrapper[4740]: I1014 13:09:17.929072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.929179 master-1 kubenswrapper[4740]: I1014 13:09:17.929084 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:17.929179 master-1 kubenswrapper[4740]: I1014 13:09:17.929114 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"etcd-master-1\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:09:18.011074 master-1 kubenswrapper[4740]: I1014 13:09:18.010957 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8l654" Oct 14 13:09:18.126302 master-1 kubenswrapper[4740]: I1014 13:09:18.124943 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 14 13:09:18.128507 master-1 kubenswrapper[4740]: I1014 13:09:18.127601 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-b6pv4" event={"ID":"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157","Type":"ContainerStarted","Data":"035415dbdbf4e294523bae7dc5b7ab4c1a44a6464cf12f4d2ed9d2c20a7e1177"} Oct 14 13:09:18.134976 master-1 kubenswrapper[4740]: I1014 13:09:18.134896 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c","Type":"ContainerStarted","Data":"22e847e0bef1c56671d5e1c4a1b3dfb603b1291e9f6aafc10706bc8255ac0942"} Oct 14 13:09:18.484575 master-1 kubenswrapper[4740]: I1014 13:09:18.484502 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8l654"] Oct 14 13:09:19.144673 master-1 kubenswrapper[4740]: I1014 13:09:19.144605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"54b6416dbd62fd1f307abc240b1ce660c8da13b08dfb78c302cb6c5689b34f4d"} Oct 14 13:09:19.146501 master-1 kubenswrapper[4740]: I1014 13:09:19.146423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8l654" event={"ID":"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1","Type":"ContainerStarted","Data":"422a5026fba31a05a8c86e8a2863c9cd378a1678450fd7fcfe130d6fe51e1725"} Oct 14 13:09:20.469687 master-1 kubenswrapper[4740]: I1014 13:09:20.469533 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 
14 13:09:20.470696 master-1 kubenswrapper[4740]: E1014 13:09:20.469905 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:09:20.470696 master-1 kubenswrapper[4740]: E1014 13:09:20.470028 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:10:24.46999553 +0000 UTC m=+250.280284899 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:09:21.161439 master-1 kubenswrapper[4740]: I1014 13:09:21.161297 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-b6pv4" event={"ID":"f4b808ea-786b-4ff6-a7e8-73b0c9ac8157","Type":"ContainerStarted","Data":"5519e3bd499bed96072b618dccfff294f82a1a91f01df6c7f88964229b0b8ffb"} Oct 14 13:09:22.172543 master-1 kubenswrapper[4740]: I1014 13:09:22.172330 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-1_b1c6b650-cfb9-4098-8d7b-43e9735daa7e/installer/0.log" Oct 14 13:09:22.173419 master-1 kubenswrapper[4740]: I1014 13:09:22.172510 4740 generic.go:334] "Generic (PLEG): container finished" podID="b1c6b650-cfb9-4098-8d7b-43e9735daa7e" containerID="8a9f408f98b36e1ea4133bf7b4f42ed68e1dd2a435ba0712bbcd80ab5ee422e3" exitCode=1 Oct 14 13:09:22.173419 master-1 kubenswrapper[4740]: I1014 13:09:22.172599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" 
event={"ID":"b1c6b650-cfb9-4098-8d7b-43e9735daa7e","Type":"ContainerDied","Data":"8a9f408f98b36e1ea4133bf7b4f42ed68e1dd2a435ba0712bbcd80ab5ee422e3"} Oct 14 13:09:22.175031 master-1 kubenswrapper[4740]: I1014 13:09:22.174976 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c","Type":"ContainerStarted","Data":"eba6b60c89b0f2f1ee7e61ff4b6a123bde8c78c2f149a70b77fe188ea35718fc"} Oct 14 13:09:22.177697 master-1 kubenswrapper[4740]: I1014 13:09:22.177651 4740 generic.go:334] "Generic (PLEG): container finished" podID="b61b7a8e-e2be-4f11-a659-1919213dda51" containerID="9f41636be726016072c28ea80b0c3486ab89141361a1377e8eeffd48959d0e15" exitCode=0 Oct 14 13:09:22.177697 master-1 kubenswrapper[4740]: I1014 13:09:22.177695 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"b61b7a8e-e2be-4f11-a659-1919213dda51","Type":"ContainerDied","Data":"9f41636be726016072c28ea80b0c3486ab89141361a1377e8eeffd48959d0e15"} Oct 14 13:09:23.201336 master-1 kubenswrapper[4740]: I1014 13:09:23.201249 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-1" podStartSLOduration=7.201201664 podStartE2EDuration="7.201201664s" podCreationTimestamp="2025-10-14 13:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:23.199484787 +0000 UTC m=+189.009774206" watchObservedRunningTime="2025-10-14 13:09:23.201201664 +0000 UTC m=+189.011491023" Oct 14 13:09:23.217742 master-1 kubenswrapper[4740]: I1014 13:09:23.217631 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-b6pv4" podStartSLOduration=7.217601728 podStartE2EDuration="7.217601728s" podCreationTimestamp="2025-10-14 13:09:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:23.215756346 +0000 UTC m=+189.026045765" watchObservedRunningTime="2025-10-14 13:09:23.217601728 +0000 UTC m=+189.027891087" Oct 14 13:09:23.567883 master-1 kubenswrapper[4740]: E1014 13:09:23.567798 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podb61b7a8e_e2be_4f11_a659_1919213dda51.slice/crio-conmon-9f41636be726016072c28ea80b0c3486ab89141361a1377e8eeffd48959d0e15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod8dddfa29_2bde_416f_870d_c24a4c6c67db.slice/crio-981741f7052478875c13c55a55203ce953f2bf65a91b6409d8b46febf48e712d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podb1c6b650_cfb9_4098_8d7b_43e9735daa7e.slice/crio-8a9f408f98b36e1ea4133bf7b4f42ed68e1dd2a435ba0712bbcd80ab5ee422e3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod8dddfa29_2bde_416f_870d_c24a4c6c67db.slice/crio-conmon-981741f7052478875c13c55a55203ce953f2bf65a91b6409d8b46febf48e712d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podb1c6b650_cfb9_4098_8d7b_43e9735daa7e.slice/crio-conmon-8a9f408f98b36e1ea4133bf7b4f42ed68e1dd2a435ba0712bbcd80ab5ee422e3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podb61b7a8e_e2be_4f11_a659_1919213dda51.slice/crio-9f41636be726016072c28ea80b0c3486ab89141361a1377e8eeffd48959d0e15.scope\": RecentStats: unable to find data in memory cache]" Oct 14 13:09:23.970723 master-1 kubenswrapper[4740]: I1014 13:09:23.970489 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 14 13:09:23.972205 master-1 kubenswrapper[4740]: I1014 13:09:23.971928 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-guard-master-1" Oct 14 13:09:23.974845 master-1 kubenswrapper[4740]: I1014 13:09:23.974797 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"openshift-service-ca.crt" Oct 14 13:09:23.976812 master-1 kubenswrapper[4740]: I1014 13:09:23.976767 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 14 13:09:24.128072 master-1 kubenswrapper[4740]: I1014 13:09:24.127989 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vd6m\" (UniqueName: \"kubernetes.io/projected/e4b81afc-7eb3-4303-91f8-593c130da282-kube-api-access-8vd6m\") pod \"etcd-guard-master-1\" (UID: \"e4b81afc-7eb3-4303-91f8-593c130da282\") " pod="openshift-etcd/etcd-guard-master-1" Oct 14 13:09:24.191423 master-1 kubenswrapper[4740]: I1014 13:09:24.191356 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_8dddfa29-2bde-416f-870d-c24a4c6c67db/installer/0.log" Oct 14 13:09:24.191792 master-1 kubenswrapper[4740]: I1014 13:09:24.191441 4740 generic.go:334] "Generic (PLEG): container finished" podID="8dddfa29-2bde-416f-870d-c24a4c6c67db" containerID="981741f7052478875c13c55a55203ce953f2bf65a91b6409d8b46febf48e712d" exitCode=1 Oct 14 13:09:24.191792 master-1 kubenswrapper[4740]: I1014 13:09:24.191491 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" event={"ID":"8dddfa29-2bde-416f-870d-c24a4c6c67db","Type":"ContainerDied","Data":"981741f7052478875c13c55a55203ce953f2bf65a91b6409d8b46febf48e712d"} Oct 14 13:09:24.230051 master-1 kubenswrapper[4740]: I1014 13:09:24.229857 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vd6m\" (UniqueName: \"kubernetes.io/projected/e4b81afc-7eb3-4303-91f8-593c130da282-kube-api-access-8vd6m\") pod 
\"etcd-guard-master-1\" (UID: \"e4b81afc-7eb3-4303-91f8-593c130da282\") " pod="openshift-etcd/etcd-guard-master-1" Oct 14 13:09:24.255896 master-1 kubenswrapper[4740]: I1014 13:09:24.255809 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vd6m\" (UniqueName: \"kubernetes.io/projected/e4b81afc-7eb3-4303-91f8-593c130da282-kube-api-access-8vd6m\") pod \"etcd-guard-master-1\" (UID: \"e4b81afc-7eb3-4303-91f8-593c130da282\") " pod="openshift-etcd/etcd-guard-master-1" Oct 14 13:09:24.302244 master-1 kubenswrapper[4740]: I1014 13:09:24.302148 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-guard-master-1" Oct 14 13:09:26.274203 master-1 kubenswrapper[4740]: I1014 13:09:26.274084 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-1_b1c6b650-cfb9-4098-8d7b-43e9735daa7e/installer/0.log" Oct 14 13:09:26.274203 master-1 kubenswrapper[4740]: I1014 13:09:26.274191 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1" Oct 14 13:09:26.282782 master-1 kubenswrapper[4740]: I1014 13:09:26.282715 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-1" Oct 14 13:09:26.363592 master-1 kubenswrapper[4740]: I1014 13:09:26.363484 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kube-api-access\") pod \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " Oct 14 13:09:26.363592 master-1 kubenswrapper[4740]: I1014 13:09:26.363589 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-var-lock\") pod \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363634 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-var-lock\") pod \"b61b7a8e-e2be-4f11-a659-1919213dda51\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363718 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-var-lock" (OuterVolumeSpecName: "var-lock") pod "b1c6b650-cfb9-4098-8d7b-43e9735daa7e" (UID: "b1c6b650-cfb9-4098-8d7b-43e9735daa7e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363747 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-kubelet-dir\") pod \"b61b7a8e-e2be-4f11-a659-1919213dda51\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363785 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kubelet-dir\") pod \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\" (UID: \"b1c6b650-cfb9-4098-8d7b-43e9735daa7e\") " Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363792 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-var-lock" (OuterVolumeSpecName: "var-lock") pod "b61b7a8e-e2be-4f11-a659-1919213dda51" (UID: "b61b7a8e-e2be-4f11-a659-1919213dda51"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363820 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b61b7a8e-e2be-4f11-a659-1919213dda51" (UID: "b61b7a8e-e2be-4f11-a659-1919213dda51"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363835 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b61b7a8e-e2be-4f11-a659-1919213dda51-kube-api-access\") pod \"b61b7a8e-e2be-4f11-a659-1919213dda51\" (UID: \"b61b7a8e-e2be-4f11-a659-1919213dda51\") " Oct 14 13:09:26.363975 master-1 kubenswrapper[4740]: I1014 13:09:26.363843 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1c6b650-cfb9-4098-8d7b-43e9735daa7e" (UID: "b1c6b650-cfb9-4098-8d7b-43e9735daa7e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:26.364760 master-1 kubenswrapper[4740]: I1014 13:09:26.364195 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:26.364760 master-1 kubenswrapper[4740]: I1014 13:09:26.364218 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:26.364760 master-1 kubenswrapper[4740]: I1014 13:09:26.364270 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b61b7a8e-e2be-4f11-a659-1919213dda51-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:26.364760 master-1 kubenswrapper[4740]: I1014 13:09:26.364292 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:26.367729 
master-1 kubenswrapper[4740]: I1014 13:09:26.367633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1c6b650-cfb9-4098-8d7b-43e9735daa7e" (UID: "b1c6b650-cfb9-4098-8d7b-43e9735daa7e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:09:26.369694 master-1 kubenswrapper[4740]: I1014 13:09:26.369642 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61b7a8e-e2be-4f11-a659-1919213dda51-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b61b7a8e-e2be-4f11-a659-1919213dda51" (UID: "b61b7a8e-e2be-4f11-a659-1919213dda51"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:09:26.466449 master-1 kubenswrapper[4740]: I1014 13:09:26.466250 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b61b7a8e-e2be-4f11-a659-1919213dda51-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:26.466449 master-1 kubenswrapper[4740]: I1014 13:09:26.466305 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1c6b650-cfb9-4098-8d7b-43e9735daa7e-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:27.210679 master-1 kubenswrapper[4740]: I1014 13:09:27.210611 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-1" event={"ID":"b61b7a8e-e2be-4f11-a659-1919213dda51","Type":"ContainerDied","Data":"83cc22825e56988eb9e23b29a138bc79b0bfe6feac31dee8186d5737473dd1cf"} Oct 14 13:09:27.210679 master-1 kubenswrapper[4740]: I1014 13:09:27.210671 4740 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="83cc22825e56988eb9e23b29a138bc79b0bfe6feac31dee8186d5737473dd1cf" Oct 14 13:09:27.211136 master-1 kubenswrapper[4740]: I1014 13:09:27.210697 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-1" Oct 14 13:09:27.213556 master-1 kubenswrapper[4740]: I1014 13:09:27.213523 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-1_b1c6b650-cfb9-4098-8d7b-43e9735daa7e/installer/0.log" Oct 14 13:09:27.213792 master-1 kubenswrapper[4740]: I1014 13:09:27.213756 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-1" event={"ID":"b1c6b650-cfb9-4098-8d7b-43e9735daa7e","Type":"ContainerDied","Data":"d4d976ea506873910ec98617359e213fac97d298e1d48f1c567934a8120e8b4e"} Oct 14 13:09:27.214045 master-1 kubenswrapper[4740]: I1014 13:09:27.213853 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-1" Oct 14 13:09:27.214191 master-1 kubenswrapper[4740]: I1014 13:09:27.213996 4740 scope.go:117] "RemoveContainer" containerID="8a9f408f98b36e1ea4133bf7b4f42ed68e1dd2a435ba0712bbcd80ab5ee422e3" Oct 14 13:09:27.235756 master-1 kubenswrapper[4740]: I1014 13:09:27.235695 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 14 13:09:27.240902 master-1 kubenswrapper[4740]: I1014 13:09:27.240844 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-1"] Oct 14 13:09:28.213535 master-1 kubenswrapper[4740]: I1014 13:09:28.213438 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"] Oct 14 13:09:28.214620 master-1 kubenswrapper[4740]: I1014 13:09:28.213709 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/installer-2-master-1" podUID="a2409a99-4fb0-44cb-a711-42808935cb31" containerName="installer" containerID="cri-o://57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe" gracePeriod=30 Oct 14 13:09:28.653752 master-1 kubenswrapper[4740]: I1014 13:09:28.653704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_8dddfa29-2bde-416f-870d-c24a4c6c67db/installer/0.log" Oct 14 13:09:28.653939 master-1 kubenswrapper[4740]: I1014 13:09:28.653810 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:09:28.769867 master-1 kubenswrapper[4740]: I1014 13:09:28.769771 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 14 13:09:28.798479 master-1 kubenswrapper[4740]: I1014 13:09:28.798433 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-var-lock\") pod \"8dddfa29-2bde-416f-870d-c24a4c6c67db\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " Oct 14 13:09:28.798686 master-1 kubenswrapper[4740]: I1014 13:09:28.798510 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-kubelet-dir\") pod \"8dddfa29-2bde-416f-870d-c24a4c6c67db\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " Oct 14 13:09:28.798686 master-1 kubenswrapper[4740]: I1014 13:09:28.798583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dddfa29-2bde-416f-870d-c24a4c6c67db-kube-api-access\") pod \"8dddfa29-2bde-416f-870d-c24a4c6c67db\" (UID: \"8dddfa29-2bde-416f-870d-c24a4c6c67db\") " Oct 14 13:09:28.799217 master-1 
kubenswrapper[4740]: I1014 13:09:28.799178 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8dddfa29-2bde-416f-870d-c24a4c6c67db" (UID: "8dddfa29-2bde-416f-870d-c24a4c6c67db"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:28.799428 master-1 kubenswrapper[4740]: I1014 13:09:28.799198 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-var-lock" (OuterVolumeSpecName: "var-lock") pod "8dddfa29-2bde-416f-870d-c24a4c6c67db" (UID: "8dddfa29-2bde-416f-870d-c24a4c6c67db"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:28.803674 master-1 kubenswrapper[4740]: I1014 13:09:28.803634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dddfa29-2bde-416f-870d-c24a4c6c67db-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8dddfa29-2bde-416f-870d-c24a4c6c67db" (UID: "8dddfa29-2bde-416f-870d-c24a4c6c67db"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:09:28.900306 master-1 kubenswrapper[4740]: I1014 13:09:28.900248 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8dddfa29-2bde-416f-870d-c24a4c6c67db-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:28.900306 master-1 kubenswrapper[4740]: I1014 13:09:28.900282 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:28.900306 master-1 kubenswrapper[4740]: I1014 13:09:28.900291 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8dddfa29-2bde-416f-870d-c24a4c6c67db-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:28.949757 master-1 kubenswrapper[4740]: I1014 13:09:28.949712 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c6b650-cfb9-4098-8d7b-43e9735daa7e" path="/var/lib/kubelet/pods/b1c6b650-cfb9-4098-8d7b-43e9735daa7e/volumes" Oct 14 13:09:29.133793 master-1 kubenswrapper[4740]: I1014 13:09:29.133762 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-1_a2409a99-4fb0-44cb-a711-42808935cb31/installer/0.log" Oct 14 13:09:29.133872 master-1 kubenswrapper[4740]: I1014 13:09:29.133824 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:29.245830 master-1 kubenswrapper[4740]: I1014 13:09:29.245786 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-1_a2409a99-4fb0-44cb-a711-42808935cb31/installer/0.log" Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.245860 4740 generic.go:334] "Generic (PLEG): container finished" podID="a2409a99-4fb0-44cb-a711-42808935cb31" containerID="57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe" exitCode=1 Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.246008 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"a2409a99-4fb0-44cb-a711-42808935cb31","Type":"ContainerDied","Data":"57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe"} Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.246186 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-1" event={"ID":"a2409a99-4fb0-44cb-a711-42808935cb31","Type":"ContainerDied","Data":"2d00dc20741eb4cc65635156e3ecea46085aad5078baa5a70140c0afb7a9d40a"} Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.246365 4740 scope.go:117] "RemoveContainer" containerID="57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe" Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.246564 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-1" Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.251207 4740 generic.go:334] "Generic (PLEG): container finished" podID="24d7cccd-3100-4c4f-9303-fc57993b465e" containerID="f9c246644b612436343a8707c550c7c44e4b0bad27bf2f5a48fa4db7fd206e5e" exitCode=0 Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.251306 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" event={"ID":"24d7cccd-3100-4c4f-9303-fc57993b465e","Type":"ContainerDied","Data":"f9c246644b612436343a8707c550c7c44e4b0bad27bf2f5a48fa4db7fd206e5e"} Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.251887 4740 scope.go:117] "RemoveContainer" containerID="f9c246644b612436343a8707c550c7c44e4b0bad27bf2f5a48fa4db7fd206e5e" Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.254811 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-1_8dddfa29-2bde-416f-870d-c24a4c6c67db/installer/0.log" Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.255090 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-1" Oct 14 13:09:29.260599 master-1 kubenswrapper[4740]: I1014 13:09:29.255125 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-1" event={"ID":"8dddfa29-2bde-416f-870d-c24a4c6c67db","Type":"ContainerDied","Data":"dcac79a41e252093856baafe6af533c786dff50094089580f33f9266280a4f91"} Oct 14 13:09:29.276566 master-1 kubenswrapper[4740]: I1014 13:09:29.276520 4740 scope.go:117] "RemoveContainer" containerID="57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe" Oct 14 13:09:29.277294 master-1 kubenswrapper[4740]: E1014 13:09:29.277187 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe\": container with ID starting with 57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe not found: ID does not exist" containerID="57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe" Oct 14 13:09:29.277366 master-1 kubenswrapper[4740]: I1014 13:09:29.277298 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe"} err="failed to get container status \"57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe\": rpc error: code = NotFound desc = could not find container \"57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe\": container with ID starting with 57dc24273e0d0a408e64cfa7617ff1dacb53cee6131c2b6d624603b00c89cbbe not found: ID does not exist" Oct 14 13:09:29.277366 master-1 kubenswrapper[4740]: I1014 13:09:29.277346 4740 scope.go:117] "RemoveContainer" containerID="981741f7052478875c13c55a55203ce953f2bf65a91b6409d8b46febf48e712d" Oct 14 13:09:29.285861 master-1 kubenswrapper[4740]: I1014 13:09:29.285813 4740 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 14 13:09:29.289620 master-1 kubenswrapper[4740]: I1014 13:09:29.289555 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-1"] Oct 14 13:09:29.304211 master-1 kubenswrapper[4740]: I1014 13:09:29.304184 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2409a99-4fb0-44cb-a711-42808935cb31-kube-api-access\") pod \"a2409a99-4fb0-44cb-a711-42808935cb31\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " Oct 14 13:09:29.304359 master-1 kubenswrapper[4740]: I1014 13:09:29.304243 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-var-lock\") pod \"a2409a99-4fb0-44cb-a711-42808935cb31\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " Oct 14 13:09:29.304359 master-1 kubenswrapper[4740]: I1014 13:09:29.304354 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-kubelet-dir\") pod \"a2409a99-4fb0-44cb-a711-42808935cb31\" (UID: \"a2409a99-4fb0-44cb-a711-42808935cb31\") " Oct 14 13:09:29.304671 master-1 kubenswrapper[4740]: I1014 13:09:29.304639 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a2409a99-4fb0-44cb-a711-42808935cb31" (UID: "a2409a99-4fb0-44cb-a711-42808935cb31"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:29.305244 master-1 kubenswrapper[4740]: I1014 13:09:29.305186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-var-lock" (OuterVolumeSpecName: "var-lock") pod "a2409a99-4fb0-44cb-a711-42808935cb31" (UID: "a2409a99-4fb0-44cb-a711-42808935cb31"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:09:29.407421 master-1 kubenswrapper[4740]: I1014 13:09:29.407278 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:29.407421 master-1 kubenswrapper[4740]: I1014 13:09:29.407325 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a2409a99-4fb0-44cb-a711-42808935cb31-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:29.425770 master-1 kubenswrapper[4740]: I1014 13:09:29.425694 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2409a99-4fb0-44cb-a711-42808935cb31-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a2409a99-4fb0-44cb-a711-42808935cb31" (UID: "a2409a99-4fb0-44cb-a711-42808935cb31"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:09:29.508626 master-1 kubenswrapper[4740]: I1014 13:09:29.508579 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2409a99-4fb0-44cb-a711-42808935cb31-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:09:29.560669 master-1 kubenswrapper[4740]: I1014 13:09:29.560595 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/etcd-guard-master-1"] Oct 14 13:09:29.576074 master-1 kubenswrapper[4740]: W1014 13:09:29.575996 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4b81afc_7eb3_4303_91f8_593c130da282.slice/crio-3f9352a2ec15759067e7bc8fca769f16b8cf95c94728bbaa0e83618151bf83b2 WatchSource:0}: Error finding container 3f9352a2ec15759067e7bc8fca769f16b8cf95c94728bbaa0e83618151bf83b2: Status 404 returned error can't find the container with id 3f9352a2ec15759067e7bc8fca769f16b8cf95c94728bbaa0e83618151bf83b2 Oct 14 13:09:29.648894 master-1 kubenswrapper[4740]: I1014 13:09:29.648832 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"] Oct 14 13:09:29.653009 master-1 kubenswrapper[4740]: I1014 13:09:29.652953 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-1"] Oct 14 13:09:30.263540 master-1 kubenswrapper[4740]: I1014 13:09:30.263484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8l654" event={"ID":"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1","Type":"ContainerStarted","Data":"5684092e9f7af85bf3644fe06750a466390a5887b2cb5282b6846469031c8bac"} Oct 14 13:09:30.263540 master-1 kubenswrapper[4740]: I1014 13:09:30.263536 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8l654" 
event={"ID":"1fcb9e98-c670-46ff-bb26-4b97cfd2b4c1","Type":"ContainerStarted","Data":"2672b9f2ba3fd36fd9a702da3b623a1f1b56dd1a61ccda763ba9a8bd218960b2"} Oct 14 13:09:30.264316 master-1 kubenswrapper[4740]: I1014 13:09:30.264277 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerStarted","Data":"57f4d6aac1f3c80fb4d6e8a8343432ff9667911716e629d1c9aa8b443a819f98"} Oct 14 13:09:30.265767 master-1 kubenswrapper[4740]: I1014 13:09:30.265300 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-1" event={"ID":"e4b81afc-7eb3-4303-91f8-593c130da282","Type":"ContainerStarted","Data":"1d1114560747d18087c4f0c588e39a779cfcec3b568d15ff9a224b39cda6161c"} Oct 14 13:09:30.265767 master-1 kubenswrapper[4740]: I1014 13:09:30.265329 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-guard-master-1" event={"ID":"e4b81afc-7eb3-4303-91f8-593c130da282","Type":"ContainerStarted","Data":"3f9352a2ec15759067e7bc8fca769f16b8cf95c94728bbaa0e83618151bf83b2"} Oct 14 13:09:30.265767 master-1 kubenswrapper[4740]: I1014 13:09:30.265345 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-1" Oct 14 13:09:30.265767 master-1 kubenswrapper[4740]: I1014 13:09:30.265614 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 14 13:09:30.265767 master-1 kubenswrapper[4740]: I1014 13:09:30.265644 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: 
connect: connection refused" Oct 14 13:09:30.267171 master-1 kubenswrapper[4740]: I1014 13:09:30.267144 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc" event={"ID":"24d7cccd-3100-4c4f-9303-fc57993b465e","Type":"ContainerStarted","Data":"9643406501fc34c7043ce8d676726d4c2255e899e811a3986de38dad468e8cf4"} Oct 14 13:09:30.269240 master-1 kubenswrapper[4740]: I1014 13:09:30.269211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" event={"ID":"405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5","Type":"ContainerStarted","Data":"2bab0c94bb779607f278d1739b5fd5d94fa0537d702b0546245249323ce0473b"} Oct 14 13:09:30.269727 master-1 kubenswrapper[4740]: I1014 13:09:30.269705 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" Oct 14 13:09:30.276632 master-1 kubenswrapper[4740]: I1014 13:09:30.276574 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8l654" podStartSLOduration=130.536206836 podStartE2EDuration="2m21.276561073s" podCreationTimestamp="2025-10-14 13:07:09 +0000 UTC" firstStartedPulling="2025-10-14 13:09:18.497677708 +0000 UTC m=+184.307967047" lastFinishedPulling="2025-10-14 13:09:29.238031955 +0000 UTC m=+195.048321284" observedRunningTime="2025-10-14 13:09:30.275455513 +0000 UTC m=+196.085744842" watchObservedRunningTime="2025-10-14 13:09:30.276561073 +0000 UTC m=+196.086850402" Oct 14 13:09:30.278777 master-1 kubenswrapper[4740]: I1014 13:09:30.278734 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" Oct 14 13:09:30.294638 master-1 kubenswrapper[4740]: I1014 13:09:30.294524 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-guard-master-1" podStartSLOduration=7.294507319 podStartE2EDuration="7.294507319s" podCreationTimestamp="2025-10-14 13:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:30.29199681 +0000 UTC m=+196.102286139" watchObservedRunningTime="2025-10-14 13:09:30.294507319 +0000 UTC m=+196.104796658" Oct 14 13:09:30.313934 master-1 kubenswrapper[4740]: I1014 13:09:30.313420 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5ddb89f76-xf924" podStartSLOduration=27.97673157 podStartE2EDuration="43.313403902s" podCreationTimestamp="2025-10-14 13:08:47 +0000 UTC" firstStartedPulling="2025-10-14 13:09:13.795165549 +0000 UTC m=+179.605454918" lastFinishedPulling="2025-10-14 13:09:29.131837901 +0000 UTC m=+194.942127250" observedRunningTime="2025-10-14 13:09:30.31155829 +0000 UTC m=+196.121847629" watchObservedRunningTime="2025-10-14 13:09:30.313403902 +0000 UTC m=+196.123693231" Oct 14 13:09:30.344060 master-1 kubenswrapper[4740]: I1014 13:09:30.343993 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4" podStartSLOduration=8.152007326 podStartE2EDuration="22.343974656s" podCreationTimestamp="2025-10-14 13:09:08 +0000 UTC" firstStartedPulling="2025-10-14 13:09:14.329271788 +0000 UTC m=+180.139561117" lastFinishedPulling="2025-10-14 13:09:28.521239118 +0000 UTC m=+194.331528447" observedRunningTime="2025-10-14 13:09:30.343495373 +0000 UTC m=+196.153784702" watchObservedRunningTime="2025-10-14 13:09:30.343974656 +0000 UTC m=+196.154263995" Oct 14 13:09:30.768565 master-1 kubenswrapper[4740]: I1014 13:09:30.768485 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:09:30.771443 master-1 kubenswrapper[4740]: I1014 13:09:30.771396 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:30.771443 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:30.771443 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:30.771443 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:30.771588 master-1 kubenswrapper[4740]: I1014 13:09:30.771470 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.816299 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-1"] Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: E1014 13:09:30.816645 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c6b650-cfb9-4098-8d7b-43e9735daa7e" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.816670 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c6b650-cfb9-4098-8d7b-43e9735daa7e" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: E1014 13:09:30.816710 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dddfa29-2bde-416f-870d-c24a4c6c67db" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.816725 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dddfa29-2bde-416f-870d-c24a4c6c67db" containerName="installer" Oct 14 13:09:30.817150 master-1 
kubenswrapper[4740]: E1014 13:09:30.816743 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61b7a8e-e2be-4f11-a659-1919213dda51" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.816758 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61b7a8e-e2be-4f11-a659-1919213dda51" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: E1014 13:09:30.816783 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2409a99-4fb0-44cb-a711-42808935cb31" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.816797 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2409a99-4fb0-44cb-a711-42808935cb31" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.817108 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b61b7a8e-e2be-4f11-a659-1919213dda51" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.817128 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2409a99-4fb0-44cb-a711-42808935cb31" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.817144 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c6b650-cfb9-4098-8d7b-43e9735daa7e" containerName="installer" Oct 14 13:09:30.817150 master-1 kubenswrapper[4740]: I1014 13:09:30.817172 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dddfa29-2bde-416f-870d-c24a4c6c67db" containerName="installer" Oct 14 13:09:30.817958 master-1 kubenswrapper[4740]: I1014 13:09:30.817914 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:30.821870 master-1 kubenswrapper[4740]: I1014 13:09:30.821791 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 13:09:30.824415 master-1 kubenswrapper[4740]: I1014 13:09:30.823290 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-1"] Oct 14 13:09:30.930640 master-1 kubenswrapper[4740]: I1014 13:09:30.930583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:30.930877 master-1 kubenswrapper[4740]: I1014 13:09:30.930654 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-var-lock\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:30.930877 master-1 kubenswrapper[4740]: I1014 13:09:30.930674 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kube-api-access\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:30.960645 master-1 kubenswrapper[4740]: I1014 13:09:30.960593 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dddfa29-2bde-416f-870d-c24a4c6c67db" 
path="/var/lib/kubelet/pods/8dddfa29-2bde-416f-870d-c24a4c6c67db/volumes" Oct 14 13:09:30.961755 master-1 kubenswrapper[4740]: I1014 13:09:30.961725 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2409a99-4fb0-44cb-a711-42808935cb31" path="/var/lib/kubelet/pods/a2409a99-4fb0-44cb-a711-42808935cb31/volumes" Oct 14 13:09:31.032505 master-1 kubenswrapper[4740]: I1014 13:09:31.032400 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:31.032824 master-1 kubenswrapper[4740]: I1014 13:09:31.032788 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-var-lock\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:31.033023 master-1 kubenswrapper[4740]: I1014 13:09:31.032994 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kube-api-access\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:31.033275 master-1 kubenswrapper[4740]: I1014 13:09:31.033183 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-var-lock\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:31.033399 master-1 kubenswrapper[4740]: I1014 
13:09:31.033313 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:31.057284 master-1 kubenswrapper[4740]: I1014 13:09:31.057183 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kube-api-access\") pod \"installer-3-master-1\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:31.147648 master-1 kubenswrapper[4740]: I1014 13:09:31.147588 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:09:31.278291 master-1 kubenswrapper[4740]: I1014 13:09:31.277831 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 14 13:09:31.278291 master-1 kubenswrapper[4740]: I1014 13:09:31.277905 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 14 13:09:31.645187 master-1 kubenswrapper[4740]: I1014 13:09:31.645124 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-1"] Oct 14 13:09:31.697906 master-1 kubenswrapper[4740]: W1014 13:09:31.697834 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod85fdc046_3cba_4b6c_b9a2_7cb15289db21.slice/crio-d9ac79d848eb8f8a8adda739a91d1361de480017418a24ac2a0d0c22c24f6d32 WatchSource:0}: Error finding container d9ac79d848eb8f8a8adda739a91d1361de480017418a24ac2a0d0c22c24f6d32: Status 404 returned error can't find the container with id d9ac79d848eb8f8a8adda739a91d1361de480017418a24ac2a0d0c22c24f6d32 Oct 14 13:09:31.770359 master-1 kubenswrapper[4740]: I1014 13:09:31.770276 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:31.770359 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:31.770359 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:31.770359 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:31.770359 master-1 kubenswrapper[4740]: I1014 13:09:31.770347 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:32.296367 master-1 kubenswrapper[4740]: I1014 13:09:32.296303 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="a17e6de30d12f2a96c26a7839f239dfcb307d54996d4678acc925f2c00d9e55e" exitCode=0 Oct 14 13:09:32.296367 master-1 kubenswrapper[4740]: I1014 13:09:32.296362 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"a17e6de30d12f2a96c26a7839f239dfcb307d54996d4678acc925f2c00d9e55e"} Oct 14 13:09:32.297825 master-1 kubenswrapper[4740]: I1014 13:09:32.297731 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="eae22243-e292-4623-90b4-dae431cf47dc" containerID="fe0263de8180e4d07e93f75cd5e428f39e11c32e6586b3b42beb63acb6a0eea2" exitCode=0 Oct 14 13:09:32.297825 master-1 kubenswrapper[4740]: I1014 13:09:32.297785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" event={"ID":"eae22243-e292-4623-90b4-dae431cf47dc","Type":"ContainerDied","Data":"fe0263de8180e4d07e93f75cd5e428f39e11c32e6586b3b42beb63acb6a0eea2"} Oct 14 13:09:32.298119 master-1 kubenswrapper[4740]: I1014 13:09:32.298080 4740 scope.go:117] "RemoveContainer" containerID="fe0263de8180e4d07e93f75cd5e428f39e11c32e6586b3b42beb63acb6a0eea2" Oct 14 13:09:32.299802 master-1 kubenswrapper[4740]: I1014 13:09:32.299757 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"85fdc046-3cba-4b6c-b9a2-7cb15289db21","Type":"ContainerStarted","Data":"f0f98fe430068087f973ccec5607cf0c40a14f02f8c5d600dabe075394842225"} Oct 14 13:09:32.299802 master-1 kubenswrapper[4740]: I1014 13:09:32.299788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"85fdc046-3cba-4b6c-b9a2-7cb15289db21","Type":"ContainerStarted","Data":"d9ac79d848eb8f8a8adda739a91d1361de480017418a24ac2a0d0c22c24f6d32"} Oct 14 13:09:32.332267 master-1 kubenswrapper[4740]: I1014 13:09:32.332172 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-1" podStartSLOduration=2.332153577 podStartE2EDuration="2.332153577s" podCreationTimestamp="2025-10-14 13:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:32.331488979 +0000 UTC m=+198.141778328" watchObservedRunningTime="2025-10-14 13:09:32.332153577 +0000 UTC m=+198.142442896" Oct 14 13:09:32.771034 master-1 
kubenswrapper[4740]: I1014 13:09:32.770913 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:32.771034 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:32.771034 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:32.771034 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:32.771363 master-1 kubenswrapper[4740]: I1014 13:09:32.771072 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:33.189811 master-1 kubenswrapper[4740]: I1014 13:09:33.189662 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fmkcf"] Oct 14 13:09:33.190748 master-1 kubenswrapper[4740]: I1014 13:09:33.190702 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.194253 master-1 kubenswrapper[4740]: I1014 13:09:33.194171 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Oct 14 13:09:33.308565 master-1 kubenswrapper[4740]: I1014 13:09:33.308482 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="6eb39306f1e750f5ab8ca9dec1568e973919404ed5ef6123d484075d59ac469e" exitCode=0
Oct 14 13:09:33.308565 master-1 kubenswrapper[4740]: I1014 13:09:33.308554 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"6eb39306f1e750f5ab8ca9dec1568e973919404ed5ef6123d484075d59ac469e"}
Oct 14 13:09:33.311346 master-1 kubenswrapper[4740]: I1014 13:09:33.311294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" event={"ID":"eae22243-e292-4623-90b4-dae431cf47dc","Type":"ContainerStarted","Data":"f5655dabf1018f785c93b92fbbbc4713ff153e0d4dbb155184adb636f3b0c938"}
Oct 14 13:09:33.368441 master-1 kubenswrapper[4740]: I1014 13:09:33.368395 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cdcecfd4-6c46-4175-b7f6-5890309ea743-ready\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.368625 master-1 kubenswrapper[4740]: I1014 13:09:33.368447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cdcecfd4-6c46-4175-b7f6-5890309ea743-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.368625 master-1 kubenswrapper[4740]: I1014 13:09:33.368521 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecfd4-6c46-4175-b7f6-5890309ea743-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.368625 master-1 kubenswrapper[4740]: I1014 13:09:33.368544 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqc8j\" (UniqueName: \"kubernetes.io/projected/cdcecfd4-6c46-4175-b7f6-5890309ea743-kube-api-access-jqc8j\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.470113 master-1 kubenswrapper[4740]: I1014 13:09:33.469939 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cdcecfd4-6c46-4175-b7f6-5890309ea743-ready\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.470339 master-1 kubenswrapper[4740]: I1014 13:09:33.470279 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cdcecfd4-6c46-4175-b7f6-5890309ea743-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.470447 master-1 kubenswrapper[4740]: I1014 13:09:33.470420 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cdcecfd4-6c46-4175-b7f6-5890309ea743-ready\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.470525 master-1 kubenswrapper[4740]: I1014 13:09:33.470473 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecfd4-6c46-4175-b7f6-5890309ea743-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.470578 master-1 kubenswrapper[4740]: I1014 13:09:33.470546 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqc8j\" (UniqueName: \"kubernetes.io/projected/cdcecfd4-6c46-4175-b7f6-5890309ea743-kube-api-access-jqc8j\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.470626 master-1 kubenswrapper[4740]: I1014 13:09:33.470597 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecfd4-6c46-4175-b7f6-5890309ea743-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.471673 master-1 kubenswrapper[4740]: I1014 13:09:33.471637 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cdcecfd4-6c46-4175-b7f6-5890309ea743-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.499602 master-1 kubenswrapper[4740]: I1014 13:09:33.499544 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqc8j\" (UniqueName: \"kubernetes.io/projected/cdcecfd4-6c46-4175-b7f6-5890309ea743-kube-api-access-jqc8j\") pod \"cni-sysctl-allowlist-ds-fmkcf\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.502980 master-1 kubenswrapper[4740]: I1014 13:09:33.502932 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:33.524569 master-1 kubenswrapper[4740]: W1014 13:09:33.524500 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdcecfd4_6c46_4175_b7f6_5890309ea743.slice/crio-4df60ad6f2d9814b6b24a4ce8cfc4ed5e7de7111b32c033c5ecffc1639bddc79 WatchSource:0}: Error finding container 4df60ad6f2d9814b6b24a4ce8cfc4ed5e7de7111b32c033c5ecffc1639bddc79: Status 404 returned error can't find the container with id 4df60ad6f2d9814b6b24a4ce8cfc4ed5e7de7111b32c033c5ecffc1639bddc79
Oct 14 13:09:33.692663 master-1 kubenswrapper[4740]: E1014 13:09:33.692607 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5268b2f2ae2aef0c7f2e7a6e651ed702.slice/crio-0a7ed387459e762f8ccb30f7efeb5119321940481a9afbc53d82ca7fb27535c9.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 13:09:33.768709 master-1 kubenswrapper[4740]: I1014 13:09:33.768615 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-xf924"
Oct 14 13:09:33.771045 master-1 kubenswrapper[4740]: I1014 13:09:33.770959 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:09:33.771045 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:09:33.771045 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:09:33.771045 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:09:33.771680 master-1 kubenswrapper[4740]: I1014 13:09:33.771085 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:09:34.303184 master-1 kubenswrapper[4740]: I1014 13:09:34.303097 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:09:34.303438 master-1 kubenswrapper[4740]: I1014 13:09:34.303215 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:09:34.316538 master-1 kubenswrapper[4740]: I1014 13:09:34.316465 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" event={"ID":"cdcecfd4-6c46-4175-b7f6-5890309ea743","Type":"ContainerStarted","Data":"542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e"}
Oct 14 13:09:34.316997 master-1 kubenswrapper[4740]: I1014 13:09:34.316546 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" event={"ID":"cdcecfd4-6c46-4175-b7f6-5890309ea743","Type":"ContainerStarted","Data":"4df60ad6f2d9814b6b24a4ce8cfc4ed5e7de7111b32c033c5ecffc1639bddc79"}
Oct 14 13:09:34.316997 master-1 kubenswrapper[4740]: I1014 13:09:34.316863 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:34.320345 master-1 kubenswrapper[4740]: I1014 13:09:34.320305 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="0a7ed387459e762f8ccb30f7efeb5119321940481a9afbc53d82ca7fb27535c9" exitCode=0
Oct 14 13:09:34.320414 master-1 kubenswrapper[4740]: I1014 13:09:34.320347 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"0a7ed387459e762f8ccb30f7efeb5119321940481a9afbc53d82ca7fb27535c9"}
Oct 14 13:09:34.332771 master-1 kubenswrapper[4740]: I1014 13:09:34.332697 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" podStartSLOduration=1.332678909 podStartE2EDuration="1.332678909s" podCreationTimestamp="2025-10-14 13:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:34.330636813 +0000 UTC m=+200.140926152" watchObservedRunningTime="2025-10-14 13:09:34.332678909 +0000 UTC m=+200.142968248"
Oct 14 13:09:34.345824 master-1 kubenswrapper[4740]: I1014 13:09:34.345745 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf"
Oct 14 13:09:34.770963 master-1 kubenswrapper[4740]: I1014 13:09:34.770893 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:09:34.770963 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:09:34.770963 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:09:34.770963 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:09:34.771208 master-1 kubenswrapper[4740]: I1014 13:09:34.770988 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:09:35.189126 master-1 kubenswrapper[4740]: I1014 13:09:35.189017 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fmkcf"]
Oct 14 13:09:35.332011 master-1 kubenswrapper[4740]: I1014 13:09:35.331942 4740 generic.go:334] "Generic (PLEG): container finished" podID="63a7ff79-3d66-457a-bb4a-dc851ca9d4e8" containerID="9b74c929145b31438f3513ba5ba67f7ee6219461626ba8690455042fa87245dd" exitCode=0
Oct 14 13:09:35.332615 master-1 kubenswrapper[4740]: I1014 13:09:35.332062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" event={"ID":"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8","Type":"ContainerDied","Data":"9b74c929145b31438f3513ba5ba67f7ee6219461626ba8690455042fa87245dd"}
Oct 14 13:09:35.332805 master-1 kubenswrapper[4740]: I1014 13:09:35.332754 4740 scope.go:117] "RemoveContainer" containerID="9b74c929145b31438f3513ba5ba67f7ee6219461626ba8690455042fa87245dd"
Oct 14 13:09:35.336426 master-1 kubenswrapper[4740]: I1014 13:09:35.336350 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"92c528acff87e6797c4e47f448ba14affce5567404dea2881436450d0a65a772"}
Oct 14 13:09:35.336479 master-1 kubenswrapper[4740]: I1014 13:09:35.336435 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"3ad429dc9dd11eddee5b1383ef737b192bca643be4a667ff5b676aae5c21bf7d"}
Oct 14 13:09:35.771683 master-1 kubenswrapper[4740]: I1014 13:09:35.771550 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:09:35.771683 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:09:35.771683 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:09:35.771683 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:09:35.771683 master-1 kubenswrapper[4740]: I1014 13:09:35.771662 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:09:36.346960 master-1 kubenswrapper[4740]: I1014 13:09:36.346836 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-7dcf5bd85b-chrmm" event={"ID":"63a7ff79-3d66-457a-bb4a-dc851ca9d4e8","Type":"ContainerStarted","Data":"802ba40330ec80545134ebba3c70361d72b12eea1deead58ac88499e0171a9c2"}
Oct 14 13:09:36.349364 master-1 kubenswrapper[4740]: I1014 13:09:36.349297 4740 generic.go:334] "Generic (PLEG): container finished" podID="15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c" containerID="bf226709720cf81f2831e8db38bbdb169963c5afa56830a861340544329055d9" exitCode=0
Oct 14 13:09:36.349534 master-1 kubenswrapper[4740]: I1014 13:09:36.349424 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" event={"ID":"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c","Type":"ContainerDied","Data":"bf226709720cf81f2831e8db38bbdb169963c5afa56830a861340544329055d9"}
Oct 14 13:09:36.350005 master-1 kubenswrapper[4740]: I1014 13:09:36.349947 4740 scope.go:117] "RemoveContainer" containerID="bf226709720cf81f2831e8db38bbdb169963c5afa56830a861340544329055d9"
Oct 14 13:09:36.359547 master-1 kubenswrapper[4740]: I1014 13:09:36.359484 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"7e25ecf4c26d3750937766f75c49f56c564cc6efd9d78ab9478ae6db4d0034e2"}
Oct 14 13:09:36.362004 master-1 kubenswrapper[4740]: I1014 13:09:36.361561 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"3388480363fa320a1eccd274b5a9a4cec5eac07b78889513af824dc57bd9ba88"}
Oct 14 13:09:36.362097 master-1 kubenswrapper[4740]: I1014 13:09:36.362021 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"1c4127aa23a2bb47bd11f50f568887ce25310b4602ae0f737db8b726668165fe"}
Oct 14 13:09:36.364067 master-1 kubenswrapper[4740]: I1014 13:09:36.364013 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-5l45t_3a952fbc-3908-4e41-a914-9f63f47252e4/openshift-controller-manager-operator/0.log"
Oct 14 13:09:36.364156 master-1 kubenswrapper[4740]: I1014 13:09:36.364088 4740 generic.go:334] "Generic (PLEG): container finished" podID="3a952fbc-3908-4e41-a914-9f63f47252e4" containerID="6de25fc526ffef8f6555e86be736168c5607f69c1a5e7ea4f358240ec12270b9" exitCode=1
Oct 14 13:09:36.364389 master-1 kubenswrapper[4740]: I1014 13:09:36.364339 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" gracePeriod=30
Oct 14 13:09:36.364627 master-1 kubenswrapper[4740]: I1014 13:09:36.364474 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" event={"ID":"3a952fbc-3908-4e41-a914-9f63f47252e4","Type":"ContainerDied","Data":"6de25fc526ffef8f6555e86be736168c5607f69c1a5e7ea4f358240ec12270b9"}
Oct 14 13:09:36.365627 master-1 kubenswrapper[4740]: I1014 13:09:36.365580 4740 scope.go:117] "RemoveContainer" containerID="6de25fc526ffef8f6555e86be736168c5607f69c1a5e7ea4f358240ec12270b9"
Oct 14 13:09:36.772093 master-1 kubenswrapper[4740]: I1014 13:09:36.771961 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:09:36.772093 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:09:36.772093 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:09:36.772093 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:09:36.772093 master-1 kubenswrapper[4740]: I1014 13:09:36.772071 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:09:37.374332 master-1 kubenswrapper[4740]: I1014 13:09:37.374252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw" event={"ID":"15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c","Type":"ContainerStarted","Data":"13f80b23a3333b5d66361917cc5b470f56f91d89ba982e7d223fef2b008b230a"}
Oct 14 13:09:37.377955 master-1 kubenswrapper[4740]: I1014 13:09:37.377877 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-5l45t_3a952fbc-3908-4e41-a914-9f63f47252e4/openshift-controller-manager-operator/0.log"
Oct 14 13:09:37.378166 master-1 kubenswrapper[4740]: I1014 13:09:37.377987 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t" event={"ID":"3a952fbc-3908-4e41-a914-9f63f47252e4","Type":"ContainerStarted","Data":"62e438cb446a1934b913e9643c28de257e50b515e26f7089b87652b0aaf8567d"}
Oct 14 13:09:37.398100 master-1 kubenswrapper[4740]: I1014 13:09:37.397947 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-1" podStartSLOduration=20.39791758 podStartE2EDuration="20.39791758s" podCreationTimestamp="2025-10-14 13:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:36.444058621 +0000 UTC m=+202.254347980" watchObservedRunningTime="2025-10-14 13:09:37.39791758 +0000 UTC m=+203.208206949"
Oct 14 13:09:37.436834 master-1 kubenswrapper[4740]: I1014 13:09:37.436674 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:09:37.437120 master-1 kubenswrapper[4740]: E1014 13:09:37.436943 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:10:41.436925557 +0000 UTC m=+267.247214896 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:09:37.543399 master-1 kubenswrapper[4740]: I1014 13:09:37.543301 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:09:37.544209 master-1 kubenswrapper[4740]: E1014 13:09:37.544148 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:10:41.544107071 +0000 UTC m=+267.354396430 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:09:37.770066 master-1 kubenswrapper[4740]: I1014 13:09:37.769978 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:09:37.770066 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:09:37.770066 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:09:37.770066 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:09:37.770066 master-1 kubenswrapper[4740]: I1014 13:09:37.770047 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:09:38.125625 master-1 kubenswrapper[4740]: I1014 13:09:38.125449 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1"
Oct 14 13:09:38.125625 master-1 kubenswrapper[4740]: I1014 13:09:38.125515 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1"
Oct 14 13:09:38.771472 master-1 kubenswrapper[4740]: I1014 13:09:38.771365 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:09:38.771472 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:09:38.771472 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:09:38.771472 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:09:38.771472 master-1 kubenswrapper[4740]: I1014 13:09:38.771446 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:09:39.770746 master-1 kubenswrapper[4740]: I1014 13:09:39.770636 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:09:39.770746 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:09:39.770746 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:09:39.770746 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:09:39.771616 master-1 kubenswrapper[4740]: I1014 13:09:39.770744 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:09:39.938934 master-1 kubenswrapper[4740]: I1014 13:09:39.938858 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-p4nr9"]
Oct 14 13:09:39.940571 master-1 kubenswrapper[4740]: I1014 13:09:39.940528 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.944268 master-1 kubenswrapper[4740]: I1014 13:09:39.944185 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Oct 14 13:09:39.944464 master-1 kubenswrapper[4740]: I1014 13:09:39.944272 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Oct 14 13:09:39.944800 master-1 kubenswrapper[4740]: I1014 13:09:39.944768 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Oct 14 13:09:39.990713 master-1 kubenswrapper[4740]: I1014 13:09:39.990669 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-tls\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.991001 master-1 kubenswrapper[4740]: I1014 13:09:39.990978 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-wtmp\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.991114 master-1 kubenswrapper[4740]: I1014 13:09:39.991097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.991211 master-1 kubenswrapper[4740]: I1014 13:09:39.991196 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-metrics-client-ca\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.991378 master-1 kubenswrapper[4740]: I1014 13:09:39.991359 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-sys\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.991516 master-1 kubenswrapper[4740]: I1014 13:09:39.991499 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9ct\" (UniqueName: \"kubernetes.io/projected/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-kube-api-access-bm9ct\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.991653 master-1 kubenswrapper[4740]: I1014 13:09:39.991637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-root\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:39.991838 master-1 kubenswrapper[4740]: I1014 13:09:39.991818 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-textfile\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.091752 master-1 kubenswrapper[4740]: I1014 13:09:40.091704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:09:40.093310 master-1 kubenswrapper[4740]: I1014 13:09:40.093261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-sys\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093424 master-1 kubenswrapper[4740]: I1014 13:09:40.093333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm9ct\" (UniqueName: \"kubernetes.io/projected/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-kube-api-access-bm9ct\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093424 master-1 kubenswrapper[4740]: I1014 13:09:40.093366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-root\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093549 master-1 kubenswrapper[4740]: I1014 13:09:40.093427 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-textfile\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093549 master-1 kubenswrapper[4740]: I1014 13:09:40.093422 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-sys\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093549 master-1 kubenswrapper[4740]: I1014 13:09:40.093458 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-tls\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093549 master-1 kubenswrapper[4740]: I1014 13:09:40.093482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-wtmp\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093549 master-1 kubenswrapper[4740]: I1014 13:09:40.093502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.093549 master-1 kubenswrapper[4740]: I1014 13:09:40.093527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-metrics-client-ca\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.094551 master-1 kubenswrapper[4740]: I1014 13:09:40.094010 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-wtmp\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.094551 master-1 kubenswrapper[4740]: I1014 13:09:40.093502 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-root\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.094551 master-1 kubenswrapper[4740]: I1014 13:09:40.094182 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-metrics-client-ca\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.094551 master-1 kubenswrapper[4740]: I1014 13:09:40.094483 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-textfile\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.098285 master-1 kubenswrapper[4740]: I1014 13:09:40.098194 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.099800 master-1 kubenswrapper[4740]: I1014 13:09:40.099728 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-node-exporter-tls\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.109656 master-1 kubenswrapper[4740]: I1014 13:09:40.109591 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm9ct\" (UniqueName: \"kubernetes.io/projected/218a63b9-61b7-4ca0-b1b1-bf5cf5260960-kube-api-access-bm9ct\") pod \"node-exporter-p4nr9\" (UID: \"218a63b9-61b7-4ca0-b1b1-bf5cf5260960\") " pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.257535 master-1 kubenswrapper[4740]: I1014 13:09:40.257466 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-p4nr9"
Oct 14 13:09:40.337600 master-1 kubenswrapper[4740]: W1014 13:09:40.335853 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod218a63b9_61b7_4ca0_b1b1_bf5cf5260960.slice/crio-b204b5e3ac9c63d849bce3fa9c4b26b15d0ca8e50df3f8137f9e681932a6261d WatchSource:0}: Error finding container b204b5e3ac9c63d849bce3fa9c4b26b15d0ca8e50df3f8137f9e681932a6261d: Status 404 returned error can't find the container with id b204b5e3ac9c63d849bce3fa9c4b26b15d0ca8e50df3f8137f9e681932a6261d
Oct 14 13:09:40.399871 master-1 kubenswrapper[4740]: I1014 13:09:40.399805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-p4nr9" event={"ID":"218a63b9-61b7-4ca0-b1b1-bf5cf5260960","Type":"ContainerStarted","Data":"b204b5e3ac9c63d849bce3fa9c4b26b15d0ca8e50df3f8137f9e681932a6261d"}
Oct 14 13:09:40.402506 master-1 kubenswrapper[4740]: I1014 13:09:40.402438 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/0.log"
Oct 14 13:09:40.405299 master-1 kubenswrapper[4740]: I1014 13:09:40.403920 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="92c528acff87e6797c4e47f448ba14affce5567404dea2881436450d0a65a772" exitCode=1
Oct 14 13:09:40.405299 master-1 kubenswrapper[4740]: I1014 13:09:40.403986 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"92c528acff87e6797c4e47f448ba14affce5567404dea2881436450d0a65a772"}
Oct 14 13:09:40.405299 master-1 kubenswrapper[4740]: I1014 13:09:40.404814 4740 scope.go:117] "RemoveContainer" containerID="92c528acff87e6797c4e47f448ba14affce5567404dea2881436450d0a65a772"
Oct 14 13:09:40.405660 master-1 kubenswrapper[4740]: I1014 13:09:40.405619 4740 generic.go:334] "Generic (PLEG): container finished" podID="016573fd-7804-461e-83d7-1c019298f7c6" containerID="3af935dd187506e59446be2281bb2432ac402c0a0b1380df146e365a3addeab2" exitCode=0
Oct 14 13:09:40.405759 master-1 kubenswrapper[4740]: I1014 13:09:40.405679 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" event={"ID":"016573fd-7804-461e-83d7-1c019298f7c6","Type":"ContainerDied","Data":"3af935dd187506e59446be2281bb2432ac402c0a0b1380df146e365a3addeab2"}
Oct 14 13:09:40.406208 master-1 kubenswrapper[4740]: I1014 13:09:40.406127 4740 scope.go:117] "RemoveContainer" containerID="3af935dd187506e59446be2281bb2432ac402c0a0b1380df146e365a3addeab2"
Oct 14 13:09:40.409361 master-1 kubenswrapper[4740]: I1014 13:09:40.408604 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4f3c22a-c0cd-4727-bfb4-9f92302eb13f" containerID="f3c650e199f45169804566211177e6d38ecf868a5d13c0b7308282dd019819c8" exitCode=0
Oct 14 13:09:40.409361 master-1 kubenswrapper[4740]: I1014 13:09:40.408731 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc"
event={"ID":"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f","Type":"ContainerDied","Data":"f3c650e199f45169804566211177e6d38ecf868a5d13c0b7308282dd019819c8"} Oct 14 13:09:40.409361 master-1 kubenswrapper[4740]: I1014 13:09:40.409342 4740 scope.go:117] "RemoveContainer" containerID="f3c650e199f45169804566211177e6d38ecf868a5d13c0b7308282dd019819c8" Oct 14 13:09:40.411169 master-1 kubenswrapper[4740]: I1014 13:09:40.411122 4740 generic.go:334] "Generic (PLEG): container finished" podID="2a2b886b-005d-4d02-a231-ddacf42775ea" containerID="3aec0d5b414dd5378b2837a6c0774b59f0068ddf7ac248756ee9c342ee243ba0" exitCode=0 Oct 14 13:09:40.411169 master-1 kubenswrapper[4740]: I1014 13:09:40.411152 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" event={"ID":"2a2b886b-005d-4d02-a231-ddacf42775ea","Type":"ContainerDied","Data":"3aec0d5b414dd5378b2837a6c0774b59f0068ddf7ac248756ee9c342ee243ba0"} Oct 14 13:09:40.411465 master-1 kubenswrapper[4740]: I1014 13:09:40.411411 4740 scope.go:117] "RemoveContainer" containerID="3aec0d5b414dd5378b2837a6c0774b59f0068ddf7ac248756ee9c342ee243ba0" Oct 14 13:09:40.770346 master-1 kubenswrapper[4740]: I1014 13:09:40.770265 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:40.770346 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:40.770346 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:40.770346 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:40.770346 master-1 kubenswrapper[4740]: I1014 13:09:40.770331 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Oct 14 13:09:41.420006 master-1 kubenswrapper[4740]: I1014 13:09:41.419928 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/0.log" Oct 14 13:09:41.421796 master-1 kubenswrapper[4740]: I1014 13:09:41.421737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8"} Oct 14 13:09:41.423588 master-1 kubenswrapper[4740]: I1014 13:09:41.423541 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l" event={"ID":"016573fd-7804-461e-83d7-1c019298f7c6","Type":"ContainerStarted","Data":"f90be7cbdf881c7b72df46e7df0ff9652a4c6240eb8e973f776ab5b54fc1bc01"} Oct 14 13:09:41.425843 master-1 kubenswrapper[4740]: I1014 13:09:41.425789 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc" event={"ID":"f4f3c22a-c0cd-4727-bfb4-9f92302eb13f","Type":"ContainerStarted","Data":"c9633216a0aef83139f2df77a6bffb7cf79f60cbb960d91a2b0249ee7ee49dec"} Oct 14 13:09:41.428053 master-1 kubenswrapper[4740]: I1014 13:09:41.428007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l" event={"ID":"2a2b886b-005d-4d02-a231-ddacf42775ea","Type":"ContainerStarted","Data":"45ec9aea434fd26b9f3f20429e54d624bde696ff2db008b5c8a2ea5cce95fa38"} Oct 14 13:09:41.770687 master-1 kubenswrapper[4740]: I1014 13:09:41.770621 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:41.770687 master-1 
kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:41.770687 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:41.770687 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:41.770905 master-1 kubenswrapper[4740]: I1014 13:09:41.770724 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:42.434787 master-1 kubenswrapper[4740]: I1014 13:09:42.434729 4740 generic.go:334] "Generic (PLEG): container finished" podID="218a63b9-61b7-4ca0-b1b1-bf5cf5260960" containerID="aedb7db3069874084b1aed84277a6740b162b5d024070fe1c59b2c4c74a5d261" exitCode=0 Oct 14 13:09:42.434787 master-1 kubenswrapper[4740]: I1014 13:09:42.434774 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-p4nr9" event={"ID":"218a63b9-61b7-4ca0-b1b1-bf5cf5260960","Type":"ContainerDied","Data":"aedb7db3069874084b1aed84277a6740b162b5d024070fe1c59b2c4c74a5d261"} Oct 14 13:09:42.770368 master-1 kubenswrapper[4740]: I1014 13:09:42.770294 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:42.770368 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:42.770368 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:42.770368 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:42.770619 master-1 kubenswrapper[4740]: I1014 13:09:42.770400 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Oct 14 13:09:42.983458 master-1 kubenswrapper[4740]: I1014 13:09:42.983348 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b"] Oct 14 13:09:42.984639 master-1 kubenswrapper[4740]: I1014 13:09:42.984610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:42.992675 master-1 kubenswrapper[4740]: I1014 13:09:42.992623 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b"] Oct 14 13:09:43.125971 master-1 kubenswrapper[4740]: I1014 13:09:43.125864 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1" Oct 14 13:09:43.135636 master-1 kubenswrapper[4740]: I1014 13:09:43.135572 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/819cb927-5174-4df8-a723-cc07e53d9044-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-m8s2b\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:43.135947 master-1 kubenswrapper[4740]: I1014 13:09:43.135687 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qtzm\" (UniqueName: \"kubernetes.io/projected/819cb927-5174-4df8-a723-cc07e53d9044-kube-api-access-9qtzm\") pod \"multus-admission-controller-7b6b7bb859-m8s2b\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:43.237329 master-1 kubenswrapper[4740]: I1014 13:09:43.237215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/819cb927-5174-4df8-a723-cc07e53d9044-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-m8s2b\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:43.237589 master-1 kubenswrapper[4740]: I1014 13:09:43.237399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qtzm\" (UniqueName: \"kubernetes.io/projected/819cb927-5174-4df8-a723-cc07e53d9044-kube-api-access-9qtzm\") pod \"multus-admission-controller-7b6b7bb859-m8s2b\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:43.243180 master-1 kubenswrapper[4740]: I1014 13:09:43.243114 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/819cb927-5174-4df8-a723-cc07e53d9044-webhook-certs\") pod \"multus-admission-controller-7b6b7bb859-m8s2b\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:43.268218 master-1 kubenswrapper[4740]: I1014 13:09:43.268146 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qtzm\" (UniqueName: \"kubernetes.io/projected/819cb927-5174-4df8-a723-cc07e53d9044-kube-api-access-9qtzm\") pod \"multus-admission-controller-7b6b7bb859-m8s2b\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:43.350204 master-1 kubenswrapper[4740]: I1014 13:09:43.350133 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:09:43.448999 master-1 kubenswrapper[4740]: I1014 13:09:43.448272 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-p4nr9" event={"ID":"218a63b9-61b7-4ca0-b1b1-bf5cf5260960","Type":"ContainerStarted","Data":"f45f1f6a0420e5299959e84a53877c905c1629ec51e718047b8ccee0131a9c51"} Oct 14 13:09:43.448999 master-1 kubenswrapper[4740]: I1014 13:09:43.448368 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-p4nr9" event={"ID":"218a63b9-61b7-4ca0-b1b1-bf5cf5260960","Type":"ContainerStarted","Data":"1a37e6b5af581a2a381243b8b538602de4dd7d6c66b794a83db0f9ea080535d9"} Oct 14 13:09:43.478374 master-1 kubenswrapper[4740]: I1014 13:09:43.478150 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-p4nr9" podStartSLOduration=3.16432825 podStartE2EDuration="4.478107728s" podCreationTimestamp="2025-10-14 13:09:39 +0000 UTC" firstStartedPulling="2025-10-14 13:09:40.337566824 +0000 UTC m=+206.147856153" lastFinishedPulling="2025-10-14 13:09:41.651346302 +0000 UTC m=+207.461635631" observedRunningTime="2025-10-14 13:09:43.468941409 +0000 UTC m=+209.279230768" watchObservedRunningTime="2025-10-14 13:09:43.478107728 +0000 UTC m=+209.288397087" Oct 14 13:09:43.505917 master-1 kubenswrapper[4740]: E1014 13:09:43.505765 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:09:43.507800 master-1 kubenswrapper[4740]: E1014 13:09:43.507737 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:09:43.509535 master-1 kubenswrapper[4740]: E1014 13:09:43.509473 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:09:43.509591 master-1 kubenswrapper[4740]: E1014 13:09:43.509544 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerName="kube-multus-additional-cni-plugins" Oct 14 13:09:43.651671 master-1 kubenswrapper[4740]: I1014 13:09:43.651578 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv" Oct 14 13:09:43.770531 master-1 kubenswrapper[4740]: I1014 13:09:43.770391 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:43.770531 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:43.770531 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:43.770531 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:43.770531 master-1 kubenswrapper[4740]: I1014 13:09:43.770473 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" 
podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:43.812941 master-1 kubenswrapper[4740]: I1014 13:09:43.812901 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b"] Oct 14 13:09:44.303028 master-1 kubenswrapper[4740]: I1014 13:09:44.302866 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:09:44.303166 master-1 kubenswrapper[4740]: I1014 13:09:44.303026 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:09:44.465058 master-1 kubenswrapper[4740]: I1014 13:09:44.464980 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" event={"ID":"819cb927-5174-4df8-a723-cc07e53d9044","Type":"ContainerStarted","Data":"440e19c3852cce8cff9d2a27938ed42d68f52d44868ed579ebaf8cd8b1e09955"} Oct 14 13:09:44.465929 master-1 kubenswrapper[4740]: I1014 13:09:44.465067 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" event={"ID":"819cb927-5174-4df8-a723-cc07e53d9044","Type":"ContainerStarted","Data":"fe0e49ced70217b96835378cb2e4d66dc3f26f4f71857ad6f8c660fb548cbfcb"} Oct 14 13:09:44.465929 master-1 kubenswrapper[4740]: I1014 13:09:44.465100 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" 
event={"ID":"819cb927-5174-4df8-a723-cc07e53d9044","Type":"ContainerStarted","Data":"cb9adbe57acf28baaf717de9066dd03ed15d95d96d4942466a3cd1dc6a3a0411"} Oct 14 13:09:44.487001 master-1 kubenswrapper[4740]: I1014 13:09:44.486911 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" podStartSLOduration=2.486882996 podStartE2EDuration="2.486882996s" podCreationTimestamp="2025-10-14 13:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:09:44.482301547 +0000 UTC m=+210.292590936" watchObservedRunningTime="2025-10-14 13:09:44.486882996 +0000 UTC m=+210.297172365" Oct 14 13:09:44.512556 master-1 kubenswrapper[4740]: I1014 13:09:44.512415 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-9npgz"] Oct 14 13:09:44.513138 master-1 kubenswrapper[4740]: I1014 13:09:44.513068 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="multus-admission-controller" containerID="cri-o://5da5b33e2e38633a585455a99c0213bbadc15f83146f950b9753cdf3a2191d0a" gracePeriod=30 Oct 14 13:09:44.513367 master-1 kubenswrapper[4740]: I1014 13:09:44.513265 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="kube-rbac-proxy" containerID="cri-o://dae508e34b6e62af530a4db5d6c36d51de02b0edd600811840e76a6649c9dd75" gracePeriod=30 Oct 14 13:09:44.771266 master-1 kubenswrapper[4740]: I1014 13:09:44.771143 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:44.771266 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:44.771266 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:44.771266 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:44.771783 master-1 kubenswrapper[4740]: I1014 13:09:44.771279 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:45.277591 master-1 kubenswrapper[4740]: I1014 13:09:45.277492 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-8475fbcb68-p4n8s"] Oct 14 13:09:45.278513 master-1 kubenswrapper[4740]: I1014 13:09:45.278463 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.282953 master-1 kubenswrapper[4740]: I1014 13:09:45.282893 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Oct 14 13:09:45.282953 master-1 kubenswrapper[4740]: I1014 13:09:45.282903 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Oct 14 13:09:45.283268 master-1 kubenswrapper[4740]: I1014 13:09:45.282895 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Oct 14 13:09:45.283268 master-1 kubenswrapper[4740]: I1014 13:09:45.282973 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2hutru8havafv" Oct 14 13:09:45.283627 master-1 kubenswrapper[4740]: I1014 13:09:45.283553 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"metrics-server-audit-profiles" Oct 14 13:09:45.286734 master-1 kubenswrapper[4740]: I1014 13:09:45.286311 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-8475fbcb68-p4n8s"] Oct 14 13:09:45.471617 master-1 kubenswrapper[4740]: I1014 13:09:45.471537 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-metrics-server-audit-profiles\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.471617 master-1 kubenswrapper[4740]: I1014 13:09:45.471618 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fef43de0-1319-41d0-9ca4-d4795c56c459-audit-log\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.472627 master-1 kubenswrapper[4740]: I1014 13:09:45.471712 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qffb6\" (UniqueName: \"kubernetes.io/projected/fef43de0-1319-41d0-9ca4-d4795c56c459-kube-api-access-qffb6\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.472627 master-1 kubenswrapper[4740]: I1014 13:09:45.471756 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-client-certs\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " 
pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.472627 master-1 kubenswrapper[4740]: I1014 13:09:45.471867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-server-tls\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.472627 master-1 kubenswrapper[4740]: I1014 13:09:45.471918 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.472627 master-1 kubenswrapper[4740]: I1014 13:09:45.471981 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.477656 master-1 kubenswrapper[4740]: I1014 13:09:45.477608 4740 generic.go:334] "Generic (PLEG): container finished" podID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerID="dae508e34b6e62af530a4db5d6c36d51de02b0edd600811840e76a6649c9dd75" exitCode=0 Oct 14 13:09:45.477937 master-1 kubenswrapper[4740]: I1014 13:09:45.477692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" 
event={"ID":"01742ba1-f43b-4ff2-97d5-1a535e925a0f","Type":"ContainerDied","Data":"dae508e34b6e62af530a4db5d6c36d51de02b0edd600811840e76a6649c9dd75"} Oct 14 13:09:45.573130 master-1 kubenswrapper[4740]: I1014 13:09:45.572992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.573539 master-1 kubenswrapper[4740]: I1014 13:09:45.573507 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-metrics-server-audit-profiles\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.573732 master-1 kubenswrapper[4740]: I1014 13:09:45.573707 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fef43de0-1319-41d0-9ca4-d4795c56c459-audit-log\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.573970 master-1 kubenswrapper[4740]: I1014 13:09:45.573943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qffb6\" (UniqueName: \"kubernetes.io/projected/fef43de0-1319-41d0-9ca4-d4795c56c459-kube-api-access-qffb6\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.574115 master-1 kubenswrapper[4740]: I1014 13:09:45.574089 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-client-certs\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.574305 master-1 kubenswrapper[4740]: I1014 13:09:45.574279 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-server-tls\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.574471 master-1 kubenswrapper[4740]: I1014 13:09:45.574405 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fef43de0-1319-41d0-9ca4-d4795c56c459-audit-log\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.574612 master-1 kubenswrapper[4740]: I1014 13:09:45.574586 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.574859 master-1 kubenswrapper[4740]: I1014 13:09:45.574796 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: 
\"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.575765 master-1 kubenswrapper[4740]: I1014 13:09:45.575688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-metrics-server-audit-profiles\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.579724 master-1 kubenswrapper[4740]: I1014 13:09:45.579663 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-client-certs\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.580673 master-1 kubenswrapper[4740]: I1014 13:09:45.580612 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-server-tls\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.581714 master-1 kubenswrapper[4740]: I1014 13:09:45.581637 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.604153 master-1 kubenswrapper[4740]: I1014 13:09:45.604046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qffb6\" 
(UniqueName: \"kubernetes.io/projected/fef43de0-1319-41d0-9ca4-d4795c56c459-kube-api-access-qffb6\") pod \"metrics-server-8475fbcb68-p4n8s\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.606513 master-1 kubenswrapper[4740]: I1014 13:09:45.606449 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:45.772116 master-1 kubenswrapper[4740]: I1014 13:09:45.772019 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:45.772116 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:45.772116 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:45.772116 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:45.772116 master-1 kubenswrapper[4740]: I1014 13:09:45.772113 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:46.087909 master-1 kubenswrapper[4740]: I1014 13:09:46.087817 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-8475fbcb68-p4n8s"] Oct 14 13:09:46.092346 master-1 kubenswrapper[4740]: W1014 13:09:46.092296 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfef43de0_1319_41d0_9ca4_d4795c56c459.slice/crio-d518677c76d3497ca4266cf5076f07055ff804f4cf7d9d111123d0d3bcda4401 WatchSource:0}: Error finding container d518677c76d3497ca4266cf5076f07055ff804f4cf7d9d111123d0d3bcda4401: Status 404 returned error can't 
find the container with id d518677c76d3497ca4266cf5076f07055ff804f4cf7d9d111123d0d3bcda4401 Oct 14 13:09:46.487087 master-1 kubenswrapper[4740]: I1014 13:09:46.487001 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" event={"ID":"fef43de0-1319-41d0-9ca4-d4795c56c459","Type":"ContainerStarted","Data":"d518677c76d3497ca4266cf5076f07055ff804f4cf7d9d111123d0d3bcda4401"} Oct 14 13:09:46.490878 master-1 kubenswrapper[4740]: I1014 13:09:46.490837 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 14 13:09:46.493085 master-1 kubenswrapper[4740]: I1014 13:09:46.493029 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/0.log" Oct 14 13:09:46.495333 master-1 kubenswrapper[4740]: I1014 13:09:46.495273 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8" exitCode=1 Oct 14 13:09:46.495333 master-1 kubenswrapper[4740]: I1014 13:09:46.495322 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerDied","Data":"034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8"} Oct 14 13:09:46.495604 master-1 kubenswrapper[4740]: I1014 13:09:46.495367 4740 scope.go:117] "RemoveContainer" containerID="92c528acff87e6797c4e47f448ba14affce5567404dea2881436450d0a65a772" Oct 14 13:09:46.496389 master-1 kubenswrapper[4740]: I1014 13:09:46.496347 4740 scope.go:117] "RemoveContainer" containerID="034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8" Oct 14 13:09:46.496933 master-1 kubenswrapper[4740]: E1014 13:09:46.496823 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-1_openshift-etcd(5268b2f2ae2aef0c7f2e7a6e651ed702)\"" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" Oct 14 13:09:46.769699 master-1 kubenswrapper[4740]: I1014 13:09:46.769569 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/0.log" Oct 14 13:09:46.771164 master-1 kubenswrapper[4740]: I1014 13:09:46.771128 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:46.771164 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:46.771164 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:46.771164 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:46.771537 master-1 kubenswrapper[4740]: I1014 13:09:46.771499 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:46.976328 master-1 kubenswrapper[4740]: I1014 13:09:46.976260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/1.log" Oct 14 13:09:47.371217 master-1 kubenswrapper[4740]: I1014 13:09:47.371150 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-xf924_b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28/router/0.log" Oct 14 13:09:47.506150 master-1 kubenswrapper[4740]: I1014 13:09:47.505795 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 14 13:09:47.745526 master-1 kubenswrapper[4740]: I1014 13:09:47.745451 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"] Oct 14 13:09:47.745999 master-1 kubenswrapper[4740]: I1014 13:09:47.745901 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="multus-admission-controller" containerID="cri-o://67c17553d117fd8f968f52bb343a859674579a0e8b60300d9bbc090906179fe3" gracePeriod=30 Oct 14 13:09:47.746291 master-1 kubenswrapper[4740]: I1014 13:09:47.745996 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="kube-rbac-proxy" containerID="cri-o://b571958693e1e882b82f62f00a695871bd2fb33a9bce37964d1fc0625a97ed39" gracePeriod=30 Oct 14 13:09:47.771365 master-1 kubenswrapper[4740]: I1014 13:09:47.771273 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:47.771365 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:47.771365 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:47.771365 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:47.771850 master-1 kubenswrapper[4740]: I1014 13:09:47.771376 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Oct 14 13:09:47.968789 master-1 kubenswrapper[4740]: I1014 13:09:47.968693 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c57444595-zs4m8_57cd904e-5dfb-4cc1-8bd8-8adf12b276c6/fix-audit-permissions/0.log" Oct 14 13:09:48.125983 master-1 kubenswrapper[4740]: I1014 13:09:48.125918 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd/etcd-master-1" Oct 14 13:09:48.126220 master-1 kubenswrapper[4740]: I1014 13:09:48.126027 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1" Oct 14 13:09:48.126220 master-1 kubenswrapper[4740]: I1014 13:09:48.126089 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1" Oct 14 13:09:48.126968 master-1 kubenswrapper[4740]: I1014 13:09:48.126939 4740 scope.go:117] "RemoveContainer" containerID="034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8" Oct 14 13:09:48.127675 master-1 kubenswrapper[4740]: E1014 13:09:48.127636 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-1_openshift-etcd(5268b2f2ae2aef0c7f2e7a6e651ed702)\"" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" Oct 14 13:09:48.176711 master-1 kubenswrapper[4740]: I1014 13:09:48.176661 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c57444595-zs4m8_57cd904e-5dfb-4cc1-8bd8-8adf12b276c6/oauth-apiserver/0.log" Oct 14 13:09:48.373737 master-1 kubenswrapper[4740]: I1014 13:09:48.373551 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-dtp9l_2a2b886b-005d-4d02-a231-ddacf42775ea/etcd-operator/1.log" Oct 14 13:09:48.519277 master-1 kubenswrapper[4740]: I1014 13:09:48.519213 
4740 generic.go:334] "Generic (PLEG): container finished" podID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerID="b571958693e1e882b82f62f00a695871bd2fb33a9bce37964d1fc0625a97ed39" exitCode=0 Oct 14 13:09:48.519717 master-1 kubenswrapper[4740]: I1014 13:09:48.519261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" event={"ID":"ec085d84-4833-4e0b-9e6a-35b983a7059b","Type":"ContainerDied","Data":"b571958693e1e882b82f62f00a695871bd2fb33a9bce37964d1fc0625a97ed39"} Oct 14 13:09:48.520036 master-1 kubenswrapper[4740]: I1014 13:09:48.520015 4740 scope.go:117] "RemoveContainer" containerID="034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8" Oct 14 13:09:48.520402 master-1 kubenswrapper[4740]: E1014 13:09:48.520337 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-master-1_openshift-etcd(5268b2f2ae2aef0c7f2e7a6e651ed702)\"" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" Oct 14 13:09:48.568401 master-1 kubenswrapper[4740]: I1014 13:09:48.568351 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-guard-master-1_e4b81afc-7eb3-4303-91f8-593c130da282/guard/0.log" Oct 14 13:09:48.769805 master-1 kubenswrapper[4740]: I1014 13:09:48.769761 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/setup/0.log" Oct 14 13:09:48.770411 master-1 kubenswrapper[4740]: I1014 13:09:48.770359 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:48.770411 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 
13:09:48.770411 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:48.770411 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:48.770586 master-1 kubenswrapper[4740]: I1014 13:09:48.770461 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:48.974360 master-1 kubenswrapper[4740]: I1014 13:09:48.974210 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-ensure-env-vars/0.log" Oct 14 13:09:49.169805 master-1 kubenswrapper[4740]: I1014 13:09:49.169732 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-resources-copy/0.log" Oct 14 13:09:49.303959 master-1 kubenswrapper[4740]: I1014 13:09:49.303756 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:09:49.303959 master-1 kubenswrapper[4740]: I1014 13:09:49.303907 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:09:49.370349 master-1 kubenswrapper[4740]: I1014 13:09:49.370288 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcdctl/0.log" Oct 14 13:09:49.531577 master-1 kubenswrapper[4740]: I1014 13:09:49.531515 4740 generic.go:334] "Generic (PLEG): 
container finished" podID="f8b5ead9-7212-4a2f-8105-92d1c5384308" containerID="9301a402ed957f29e7bf36af46091070e2b25bc30c6da656535e4d6b92ed2fe1" exitCode=0 Oct 14 13:09:49.532635 master-1 kubenswrapper[4740]: I1014 13:09:49.531619 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" event={"ID":"f8b5ead9-7212-4a2f-8105-92d1c5384308","Type":"ContainerDied","Data":"9301a402ed957f29e7bf36af46091070e2b25bc30c6da656535e4d6b92ed2fe1"} Oct 14 13:09:49.532635 master-1 kubenswrapper[4740]: I1014 13:09:49.532397 4740 scope.go:117] "RemoveContainer" containerID="9301a402ed957f29e7bf36af46091070e2b25bc30c6da656535e4d6b92ed2fe1" Oct 14 13:09:49.535004 master-1 kubenswrapper[4740]: I1014 13:09:49.534945 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" event={"ID":"fef43de0-1319-41d0-9ca4-d4795c56c459","Type":"ContainerStarted","Data":"ca6fc295da9f3231ac56c683e895278718ac1b23a52cca0c02cbe23b7495fbcc"} Oct 14 13:09:49.535756 master-1 kubenswrapper[4740]: I1014 13:09:49.535577 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:09:49.568522 master-1 kubenswrapper[4740]: I1014 13:09:49.568347 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 14 13:09:49.574310 master-1 kubenswrapper[4740]: I1014 13:09:49.573811 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podStartSLOduration=2.127993822 podStartE2EDuration="4.57379462s" podCreationTimestamp="2025-10-14 13:09:45 +0000 UTC" firstStartedPulling="2025-10-14 13:09:46.095804935 +0000 UTC m=+211.906094274" lastFinishedPulling="2025-10-14 13:09:48.541605733 +0000 UTC m=+214.351895072" observedRunningTime="2025-10-14 13:09:49.569780046 +0000 
UTC m=+215.380069385" watchObservedRunningTime="2025-10-14 13:09:49.57379462 +0000 UTC m=+215.384083959" Oct 14 13:09:49.770026 master-1 kubenswrapper[4740]: I1014 13:09:49.769914 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log" Oct 14 13:09:49.772036 master-1 kubenswrapper[4740]: I1014 13:09:49.771987 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:49.772036 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:49.772036 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:49.772036 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:49.772036 master-1 kubenswrapper[4740]: I1014 13:09:49.772023 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:49.973794 master-1 kubenswrapper[4740]: I1014 13:09:49.973618 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-readyz/0.log" Oct 14 13:09:50.172465 master-1 kubenswrapper[4740]: I1014 13:09:50.172382 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log" Oct 14 13:09:50.375674 master-1 kubenswrapper[4740]: I1014 13:09:50.375579 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-1_b61b7a8e-e2be-4f11-a659-1919213dda51/installer/0.log" Oct 14 13:09:50.543951 master-1 kubenswrapper[4740]: I1014 13:09:50.543884 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" event={"ID":"f8b5ead9-7212-4a2f-8105-92d1c5384308","Type":"ContainerStarted","Data":"d0c695fd6f5a21a05ba8313e84e0c663ccabc06c7b6430ac611f1df01f278b2f"} Oct 14 13:09:50.544763 master-1 kubenswrapper[4740]: I1014 13:09:50.544681 4740 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" containerID="cri-o://9301a402ed957f29e7bf36af46091070e2b25bc30c6da656535e4d6b92ed2fe1" Oct 14 13:09:50.544821 master-1 kubenswrapper[4740]: I1014 13:09:50.544763 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" Oct 14 13:09:50.576013 master-1 kubenswrapper[4740]: I1014 13:09:50.575894 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/0.log" Oct 14 13:09:50.768047 master-1 kubenswrapper[4740]: I1014 13:09:50.767958 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/kube-rbac-proxy/0.log" Oct 14 13:09:50.770754 master-1 kubenswrapper[4740]: I1014 13:09:50.770682 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:50.770754 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:50.770754 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:50.770754 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:50.771114 master-1 kubenswrapper[4740]: I1014 13:09:50.770775 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:51.173405 master-1 kubenswrapper[4740]: I1014 13:09:51.173226 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-xf924_b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28/router/0.log" Oct 14 13:09:51.354678 master-1 kubenswrapper[4740]: I1014 13:09:51.354552 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" Oct 14 13:09:51.375888 master-1 kubenswrapper[4740]: I1014 13:09:51.375789 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-bqdtw_15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c/kube-apiserver-operator/0.log" Oct 14 13:09:51.576008 master-1 kubenswrapper[4740]: I1014 13:09:51.575953 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68f5d95b74-bqdtw_15729f9f-53d1-49d7-b0ce-6b3dbdc0c95c/kube-apiserver-operator/1.log" Oct 14 13:09:51.770543 master-1 kubenswrapper[4740]: I1014 13:09:51.770459 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:51.770543 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:51.770543 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:51.770543 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:51.771050 master-1 kubenswrapper[4740]: I1014 13:09:51.770553 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" 
podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:51.772984 master-1 kubenswrapper[4740]: I1014 13:09:51.772928 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_85fdc046-3cba-4b6c-b9a2-7cb15289db21/installer/0.log" Oct 14 13:09:51.973568 master-1 kubenswrapper[4740]: I1014 13:09:51.973513 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-ppzvt_772f8774-25f4-4987-bd40-8f3adda97e8b/kube-controller-manager-operator/0.log" Oct 14 13:09:52.178727 master-1 kubenswrapper[4740]: I1014 13:09:52.178639 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-5d85974df9-ppzvt_772f8774-25f4-4987-bd40-8f3adda97e8b/kube-controller-manager-operator/1.log" Oct 14 13:09:52.372036 master-1 kubenswrapper[4740]: I1014 13:09:52.371945 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-1_ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c/installer/0.log" Oct 14 13:09:52.561954 master-1 kubenswrapper[4740]: I1014 13:09:52.561881 4740 generic.go:334] "Generic (PLEG): container finished" podID="ec50d087-259f-45c0-a15a-7fe949ae66dd" containerID="216b13d5dbb6d6de55f0908c7858dde15ec479860670d3ed647a6491b5a2bb13" exitCode=0 Oct 14 13:09:52.561954 master-1 kubenswrapper[4740]: I1014 13:09:52.561934 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" event={"ID":"ec50d087-259f-45c0-a15a-7fe949ae66dd","Type":"ContainerDied","Data":"216b13d5dbb6d6de55f0908c7858dde15ec479860670d3ed647a6491b5a2bb13"} Oct 14 13:09:52.563631 master-1 kubenswrapper[4740]: I1014 13:09:52.563579 4740 scope.go:117] "RemoveContainer" 
containerID="216b13d5dbb6d6de55f0908c7858dde15ec479860670d3ed647a6491b5a2bb13" Oct 14 13:09:52.571383 master-1 kubenswrapper[4740]: I1014 13:09:52.571330 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6" Oct 14 13:09:52.573514 master-1 kubenswrapper[4740]: I1014 13:09:52.573375 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-766d6b44f6-gtvcp_ec50d087-259f-45c0-a15a-7fe949ae66dd/kube-scheduler-operator-container/0.log" Oct 14 13:09:52.770911 master-1 kubenswrapper[4740]: I1014 13:09:52.770865 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:52.770911 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:52.770911 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:52.770911 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:52.771540 master-1 kubenswrapper[4740]: I1014 13:09:52.770925 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:53.506334 master-1 kubenswrapper[4740]: E1014 13:09:53.506181 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:09:53.508932 master-1 kubenswrapper[4740]: E1014 13:09:53.508863 4740 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:09:53.510536 master-1 kubenswrapper[4740]: E1014 13:09:53.510479 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:09:53.510659 master-1 kubenswrapper[4740]: E1014 13:09:53.510529 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerName="kube-multus-additional-cni-plugins" Oct 14 13:09:53.569662 master-1 kubenswrapper[4740]: I1014 13:09:53.569599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp" event={"ID":"ec50d087-259f-45c0-a15a-7fe949ae66dd","Type":"ContainerStarted","Data":"dc9dc2b8ec127da9a8cdb7c4fbb0b9b1be4eb5576a40ad19a6be9f525769370c"} Oct 14 13:09:53.770213 master-1 kubenswrapper[4740]: I1014 13:09:53.769999 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:53.770213 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:53.770213 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:53.770213 
master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:53.770213 master-1 kubenswrapper[4740]: I1014 13:09:53.770079 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:53.971062 master-1 kubenswrapper[4740]: I1014 13:09:53.970979 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/cluster-olm-operator/0.log" Oct 14 13:09:54.169880 master-1 kubenswrapper[4740]: I1014 13:09:54.169729 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/copy-catalogd-manifests/0.log" Oct 14 13:09:54.305615 master-1 kubenswrapper[4740]: I1014 13:09:54.305510 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:09:54.305615 master-1 kubenswrapper[4740]: I1014 13:09:54.305604 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:09:54.368921 master-1 kubenswrapper[4740]: I1014 13:09:54.368844 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/copy-operator-controller-manifests/0.log" Oct 14 13:09:54.577548 master-1 
kubenswrapper[4740]: I1014 13:09:54.577407 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-77b56b6f4f-prtfl_f22c13e5-9b56-4f0c-a17a-677ba07226ff/cluster-olm-operator/1.log" Oct 14 13:09:54.771294 master-1 kubenswrapper[4740]: I1014 13:09:54.771134 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:54.771294 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:54.771294 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:54.771294 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:54.771294 master-1 kubenswrapper[4740]: I1014 13:09:54.771285 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:54.778095 master-1 kubenswrapper[4740]: I1014 13:09:54.777011 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-dbtvc_f4f3c22a-c0cd-4727-bfb4-9f92302eb13f/openshift-apiserver-operator/0.log" Oct 14 13:09:54.974695 master-1 kubenswrapper[4740]: I1014 13:09:54.974543 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-7d88655794-dbtvc_f4f3c22a-c0cd-4727-bfb4-9f92302eb13f/openshift-apiserver-operator/1.log" Oct 14 13:09:55.770353 master-1 kubenswrapper[4740]: I1014 13:09:55.770205 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6576f6bc9d-xfzjr_ed68870d-0f75-4bac-8f5e-36016becfd08/fix-audit-permissions/0.log" Oct 14 13:09:55.771042 master-1 
kubenswrapper[4740]: I1014 13:09:55.770973 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:55.771042 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:55.771042 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:55.771042 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:55.771389 master-1 kubenswrapper[4740]: I1014 13:09:55.771188 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:55.974352 master-1 kubenswrapper[4740]: I1014 13:09:55.974279 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6576f6bc9d-xfzjr_ed68870d-0f75-4bac-8f5e-36016becfd08/openshift-apiserver/0.log" Oct 14 13:09:56.175106 master-1 kubenswrapper[4740]: I1014 13:09:56.174887 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6576f6bc9d-xfzjr_ed68870d-0f75-4bac-8f5e-36016becfd08/openshift-apiserver-check-endpoints/0.log" Oct 14 13:09:56.377312 master-1 kubenswrapper[4740]: I1014 13:09:56.377139 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-dtp9l_2a2b886b-005d-4d02-a231-ddacf42775ea/etcd-operator/0.log" Oct 14 13:09:56.575152 master-1 kubenswrapper[4740]: I1014 13:09:56.575071 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-dtp9l_2a2b886b-005d-4d02-a231-ddacf42775ea/etcd-operator/1.log" Oct 14 13:09:56.770843 master-1 kubenswrapper[4740]: I1014 13:09:56.770764 4740 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:56.770843 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:56.770843 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:56.770843 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:56.771303 master-1 kubenswrapper[4740]: I1014 13:09:56.770848 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:56.777183 master-1 kubenswrapper[4740]: I1014 13:09:56.777108 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-5l45t_3a952fbc-3908-4e41-a914-9f63f47252e4/openshift-controller-manager-operator/0.log" Oct 14 13:09:56.970448 master-1 kubenswrapper[4740]: I1014 13:09:56.970262 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5745565d84-5l45t_3a952fbc-3908-4e41-a914-9f63f47252e4/openshift-controller-manager-operator/1.log" Oct 14 13:09:57.771137 master-1 kubenswrapper[4740]: I1014 13:09:57.771060 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:57.771137 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:57.771137 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:57.771137 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:57.771137 master-1 
kubenswrapper[4740]: I1014 13:09:57.771138 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:57.975940 master-1 kubenswrapper[4740]: I1014 13:09:57.975864 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-f966fb6f8-dwwm2_3d292fbb-b49c-4543-993b-738103c7419b/catalog-operator/0.log" Oct 14 13:09:58.172635 master-1 kubenswrapper[4740]: I1014 13:09:58.172491 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-867f8475d9-fl56c_57526e49-7f51-4a66-8f48-0c485fc1e88f/olm-operator/0.log" Oct 14 13:09:58.370509 master-1 kubenswrapper[4740]: I1014 13:09:58.370442 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-798cc87f55-j2bjv_7be129fe-d04d-4384-a0e9-76b3148a1f3e/kube-rbac-proxy/0.log" Oct 14 13:09:58.574349 master-1 kubenswrapper[4740]: I1014 13:09:58.574207 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-798cc87f55-j2bjv_7be129fe-d04d-4384-a0e9-76b3148a1f3e/package-server-manager/0.log" Oct 14 13:09:58.771011 master-1 kubenswrapper[4740]: I1014 13:09:58.770909 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:58.771011 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:58.771011 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:58.771011 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:58.772226 master-1 
kubenswrapper[4740]: I1014 13:09:58.771009 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:58.978583 master-1 kubenswrapper[4740]: I1014 13:09:58.978404 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-6f5778dccb-kwxxp_38e3dcc6-46a2-4bdd-883d-d113945b0703/packageserver/0.log" Oct 14 13:09:59.306543 master-1 kubenswrapper[4740]: I1014 13:09:59.306428 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:09:59.306919 master-1 kubenswrapper[4740]: I1014 13:09:59.306553 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:09:59.770840 master-1 kubenswrapper[4740]: I1014 13:09:59.770767 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:09:59.770840 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:09:59.770840 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:09:59.770840 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:09:59.771211 master-1 kubenswrapper[4740]: I1014 13:09:59.770870 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:09:59.944311 master-1 kubenswrapper[4740]: I1014 13:09:59.944260 4740 scope.go:117] "RemoveContainer" containerID="034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8" Oct 14 13:10:00.631520 master-1 kubenswrapper[4740]: I1014 13:10:00.631439 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 14 13:10:00.636529 master-1 kubenswrapper[4740]: I1014 13:10:00.636458 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"5268b2f2ae2aef0c7f2e7a6e651ed702","Type":"ContainerStarted","Data":"d0363272beb3e45e2b47c573ece4971be57a43ed3f3c8423ae048538797b69c8"} Oct 14 13:10:00.771804 master-1 kubenswrapper[4740]: I1014 13:10:00.771728 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:00.771804 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:00.771804 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:00.771804 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:00.772299 master-1 kubenswrapper[4740]: I1014 13:10:00.771813 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:01.771207 master-1 kubenswrapper[4740]: I1014 13:10:01.771118 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:01.771207 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:01.771207 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:01.771207 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:01.772176 master-1 kubenswrapper[4740]: I1014 13:10:01.771258 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:02.771915 master-1 kubenswrapper[4740]: I1014 13:10:02.771827 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:02.771915 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:02.771915 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:02.771915 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:02.772917 master-1 kubenswrapper[4740]: I1014 13:10:02.771912 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:03.125737 master-1 kubenswrapper[4740]: I1014 13:10:03.125678 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1" Oct 14 13:10:03.267640 master-1 kubenswrapper[4740]: I1014 13:10:03.267586 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 14 13:10:03.269405 master-1 
kubenswrapper[4740]: I1014 13:10:03.269378 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.307192 master-1 kubenswrapper[4740]: I1014 13:10:03.307109 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 14 13:10:03.346411 master-1 kubenswrapper[4740]: I1014 13:10:03.346349 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.346411 master-1 kubenswrapper[4740]: I1014 13:10:03.346418 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.447961 master-1 kubenswrapper[4740]: I1014 13:10:03.447853 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.448257 master-1 kubenswrapper[4740]: I1014 13:10:03.448208 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.448486 master-1 kubenswrapper[4740]: I1014 13:10:03.447985 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.448550 master-1 kubenswrapper[4740]: I1014 13:10:03.448313 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.505824 master-1 kubenswrapper[4740]: E1014 13:10:03.505752 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:10:03.507239 master-1 kubenswrapper[4740]: E1014 13:10:03.507160 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:10:03.508638 master-1 kubenswrapper[4740]: E1014 13:10:03.508600 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" cmd=["/bin/bash","-c","test -f /ready/ready"] Oct 14 13:10:03.508753 master-1 kubenswrapper[4740]: E1014 13:10:03.508732 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerName="kube-multus-additional-cni-plugins" Oct 14 13:10:03.602750 master-1 kubenswrapper[4740]: I1014 13:10:03.602697 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:03.621390 master-1 kubenswrapper[4740]: W1014 13:10:03.621332 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89fad8183e18ab3ad0c46d272335e5f8.slice/crio-b9e12a003bbc7c76772420501a711514135bd2c3eaf444c698e41ce4a3a777c0 WatchSource:0}: Error finding container b9e12a003bbc7c76772420501a711514135bd2c3eaf444c698e41ce4a3a777c0: Status 404 returned error can't find the container with id b9e12a003bbc7c76772420501a711514135bd2c3eaf444c698e41ce4a3a777c0 Oct 14 13:10:03.657445 master-1 kubenswrapper[4740]: I1014 13:10:03.657396 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"b9e12a003bbc7c76772420501a711514135bd2c3eaf444c698e41ce4a3a777c0"} Oct 14 13:10:03.658777 master-1 kubenswrapper[4740]: I1014 13:10:03.658741 4740 generic.go:334] "Generic (PLEG): container finished" podID="ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c" containerID="eba6b60c89b0f2f1ee7e61ff4b6a123bde8c78c2f149a70b77fe188ea35718fc" exitCode=0 Oct 14 13:10:03.658777 master-1 kubenswrapper[4740]: I1014 13:10:03.658767 
4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c","Type":"ContainerDied","Data":"eba6b60c89b0f2f1ee7e61ff4b6a123bde8c78c2f149a70b77fe188ea35718fc"} Oct 14 13:10:03.774792 master-1 kubenswrapper[4740]: I1014 13:10:03.774737 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:03.774792 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:03.774792 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:03.774792 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:03.774792 master-1 kubenswrapper[4740]: I1014 13:10:03.774786 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:04.306979 master-1 kubenswrapper[4740]: I1014 13:10:04.306855 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:10:04.307329 master-1 kubenswrapper[4740]: I1014 13:10:04.306999 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:10:04.674057 master-1 kubenswrapper[4740]: I1014 13:10:04.673980 4740 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_85fdc046-3cba-4b6c-b9a2-7cb15289db21/installer/0.log" Oct 14 13:10:04.674057 master-1 kubenswrapper[4740]: I1014 13:10:04.674047 4740 generic.go:334] "Generic (PLEG): container finished" podID="85fdc046-3cba-4b6c-b9a2-7cb15289db21" containerID="f0f98fe430068087f973ccec5607cf0c40a14f02f8c5d600dabe075394842225" exitCode=1 Oct 14 13:10:04.674460 master-1 kubenswrapper[4740]: I1014 13:10:04.674120 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"85fdc046-3cba-4b6c-b9a2-7cb15289db21","Type":"ContainerDied","Data":"f0f98fe430068087f973ccec5607cf0c40a14f02f8c5d600dabe075394842225"} Oct 14 13:10:04.792836 master-1 kubenswrapper[4740]: I1014 13:10:04.792775 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:04.792836 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:04.792836 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:04.792836 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:04.794060 master-1 kubenswrapper[4740]: I1014 13:10:04.792852 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:04.961949 master-1 kubenswrapper[4740]: I1014 13:10:04.961906 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:10:05.069853 master-1 kubenswrapper[4740]: I1014 13:10:05.069469 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kube-api-access\") pod \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " Oct 14 13:10:05.069853 master-1 kubenswrapper[4740]: I1014 13:10:05.069623 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kubelet-dir\") pod \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " Oct 14 13:10:05.069853 master-1 kubenswrapper[4740]: I1014 13:10:05.069672 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-var-lock\") pod \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\" (UID: \"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c\") " Oct 14 13:10:05.069853 master-1 kubenswrapper[4740]: I1014 13:10:05.069707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c" (UID: "ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:10:05.069853 master-1 kubenswrapper[4740]: I1014 13:10:05.069815 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-var-lock" (OuterVolumeSpecName: "var-lock") pod "ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c" (UID: "ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:10:05.070470 master-1 kubenswrapper[4740]: I1014 13:10:05.070412 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:05.070470 master-1 kubenswrapper[4740]: I1014 13:10:05.070439 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:05.072869 master-1 kubenswrapper[4740]: I1014 13:10:05.072842 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c" (UID: "ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:10:05.172118 master-1 kubenswrapper[4740]: I1014 13:10:05.171994 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:05.688071 master-1 kubenswrapper[4740]: I1014 13:10:05.688009 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-1" Oct 14 13:10:05.688071 master-1 kubenswrapper[4740]: I1014 13:10:05.688050 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-1" event={"ID":"ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c","Type":"ContainerDied","Data":"22e847e0bef1c56671d5e1c4a1b3dfb603b1291e9f6aafc10706bc8255ac0942"} Oct 14 13:10:05.688505 master-1 kubenswrapper[4740]: I1014 13:10:05.688126 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e847e0bef1c56671d5e1c4a1b3dfb603b1291e9f6aafc10706bc8255ac0942" Oct 14 13:10:05.770588 master-1 kubenswrapper[4740]: I1014 13:10:05.770544 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:05.770588 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:05.770588 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:05.770588 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:05.770858 master-1 kubenswrapper[4740]: I1014 13:10:05.770597 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:05.979502 master-1 kubenswrapper[4740]: I1014 13:10:05.979451 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_85fdc046-3cba-4b6c-b9a2-7cb15289db21/installer/0.log" Oct 14 13:10:05.979988 master-1 kubenswrapper[4740]: I1014 13:10:05.979529 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:10:06.083310 master-1 kubenswrapper[4740]: I1014 13:10:06.083206 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "85fdc046-3cba-4b6c-b9a2-7cb15289db21" (UID: "85fdc046-3cba-4b6c-b9a2-7cb15289db21"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:10:06.083310 master-1 kubenswrapper[4740]: I1014 13:10:06.083089 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kubelet-dir\") pod \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " Oct 14 13:10:06.083726 master-1 kubenswrapper[4740]: I1014 13:10:06.083501 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kube-api-access\") pod \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " Oct 14 13:10:06.083726 master-1 kubenswrapper[4740]: I1014 13:10:06.083571 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-var-lock\") pod \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\" (UID: \"85fdc046-3cba-4b6c-b9a2-7cb15289db21\") " Oct 14 13:10:06.084067 master-1 kubenswrapper[4740]: I1014 13:10:06.083973 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-var-lock" (OuterVolumeSpecName: "var-lock") pod "85fdc046-3cba-4b6c-b9a2-7cb15289db21" (UID: "85fdc046-3cba-4b6c-b9a2-7cb15289db21"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:10:06.084067 master-1 kubenswrapper[4740]: I1014 13:10:06.084016 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:06.088046 master-1 kubenswrapper[4740]: I1014 13:10:06.087998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "85fdc046-3cba-4b6c-b9a2-7cb15289db21" (UID: "85fdc046-3cba-4b6c-b9a2-7cb15289db21"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:10:06.185345 master-1 kubenswrapper[4740]: I1014 13:10:06.185247 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85fdc046-3cba-4b6c-b9a2-7cb15289db21-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:06.185345 master-1 kubenswrapper[4740]: I1014 13:10:06.185309 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85fdc046-3cba-4b6c-b9a2-7cb15289db21-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:06.470804 master-1 kubenswrapper[4740]: I1014 13:10:06.470493 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-fmkcf_cdcecfd4-6c46-4175-b7f6-5890309ea743/kube-multus-additional-cni-plugins/0.log" Oct 14 13:10:06.470804 master-1 kubenswrapper[4740]: I1014 13:10:06.470549 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" Oct 14 13:10:06.490074 master-1 kubenswrapper[4740]: I1014 13:10:06.489252 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cdcecfd4-6c46-4175-b7f6-5890309ea743-cni-sysctl-allowlist\") pod \"cdcecfd4-6c46-4175-b7f6-5890309ea743\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " Oct 14 13:10:06.490074 master-1 kubenswrapper[4740]: I1014 13:10:06.489418 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqc8j\" (UniqueName: \"kubernetes.io/projected/cdcecfd4-6c46-4175-b7f6-5890309ea743-kube-api-access-jqc8j\") pod \"cdcecfd4-6c46-4175-b7f6-5890309ea743\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " Oct 14 13:10:06.490074 master-1 kubenswrapper[4740]: I1014 13:10:06.489493 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecfd4-6c46-4175-b7f6-5890309ea743-tuning-conf-dir\") pod \"cdcecfd4-6c46-4175-b7f6-5890309ea743\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " Oct 14 13:10:06.490074 master-1 kubenswrapper[4740]: I1014 13:10:06.489560 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cdcecfd4-6c46-4175-b7f6-5890309ea743-ready\") pod \"cdcecfd4-6c46-4175-b7f6-5890309ea743\" (UID: \"cdcecfd4-6c46-4175-b7f6-5890309ea743\") " Oct 14 13:10:06.490074 master-1 kubenswrapper[4740]: I1014 13:10:06.489647 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcecfd4-6c46-4175-b7f6-5890309ea743-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "cdcecfd4-6c46-4175-b7f6-5890309ea743" (UID: "cdcecfd4-6c46-4175-b7f6-5890309ea743"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:10:06.490074 master-1 kubenswrapper[4740]: I1014 13:10:06.489845 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcecfd4-6c46-4175-b7f6-5890309ea743-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "cdcecfd4-6c46-4175-b7f6-5890309ea743" (UID: "cdcecfd4-6c46-4175-b7f6-5890309ea743"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:10:06.490405 master-1 kubenswrapper[4740]: I1014 13:10:06.490160 4740 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecfd4-6c46-4175-b7f6-5890309ea743-tuning-conf-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:06.490405 master-1 kubenswrapper[4740]: I1014 13:10:06.490198 4740 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cdcecfd4-6c46-4175-b7f6-5890309ea743-cni-sysctl-allowlist\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:06.490405 master-1 kubenswrapper[4740]: I1014 13:10:06.490213 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdcecfd4-6c46-4175-b7f6-5890309ea743-ready" (OuterVolumeSpecName: "ready") pod "cdcecfd4-6c46-4175-b7f6-5890309ea743" (UID: "cdcecfd4-6c46-4175-b7f6-5890309ea743"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:10:06.493706 master-1 kubenswrapper[4740]: I1014 13:10:06.493669 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcecfd4-6c46-4175-b7f6-5890309ea743-kube-api-access-jqc8j" (OuterVolumeSpecName: "kube-api-access-jqc8j") pod "cdcecfd4-6c46-4175-b7f6-5890309ea743" (UID: "cdcecfd4-6c46-4175-b7f6-5890309ea743"). InnerVolumeSpecName "kube-api-access-jqc8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:10:06.591566 master-1 kubenswrapper[4740]: I1014 13:10:06.591455 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqc8j\" (UniqueName: \"kubernetes.io/projected/cdcecfd4-6c46-4175-b7f6-5890309ea743-kube-api-access-jqc8j\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:06.591566 master-1 kubenswrapper[4740]: I1014 13:10:06.591533 4740 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cdcecfd4-6c46-4175-b7f6-5890309ea743-ready\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:06.696604 master-1 kubenswrapper[4740]: I1014 13:10:06.696436 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-fmkcf_cdcecfd4-6c46-4175-b7f6-5890309ea743/kube-multus-additional-cni-plugins/0.log" Oct 14 13:10:06.696604 master-1 kubenswrapper[4740]: I1014 13:10:06.696494 4740 generic.go:334] "Generic (PLEG): container finished" podID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" exitCode=137 Oct 14 13:10:06.696604 master-1 kubenswrapper[4740]: I1014 13:10:06.696581 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" Oct 14 13:10:06.697070 master-1 kubenswrapper[4740]: I1014 13:10:06.696588 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" event={"ID":"cdcecfd4-6c46-4175-b7f6-5890309ea743","Type":"ContainerDied","Data":"542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e"} Oct 14 13:10:06.697070 master-1 kubenswrapper[4740]: I1014 13:10:06.696671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fmkcf" event={"ID":"cdcecfd4-6c46-4175-b7f6-5890309ea743","Type":"ContainerDied","Data":"4df60ad6f2d9814b6b24a4ce8cfc4ed5e7de7111b32c033c5ecffc1639bddc79"} Oct 14 13:10:06.697070 master-1 kubenswrapper[4740]: I1014 13:10:06.696704 4740 scope.go:117] "RemoveContainer" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" Oct 14 13:10:06.698963 master-1 kubenswrapper[4740]: I1014 13:10:06.698888 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-1_85fdc046-3cba-4b6c-b9a2-7cb15289db21/installer/0.log" Oct 14 13:10:06.698963 master-1 kubenswrapper[4740]: I1014 13:10:06.698937 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-1" event={"ID":"85fdc046-3cba-4b6c-b9a2-7cb15289db21","Type":"ContainerDied","Data":"d9ac79d848eb8f8a8adda739a91d1361de480017418a24ac2a0d0c22c24f6d32"} Oct 14 13:10:06.698963 master-1 kubenswrapper[4740]: I1014 13:10:06.698962 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9ac79d848eb8f8a8adda739a91d1361de480017418a24ac2a0d0c22c24f6d32" Oct 14 13:10:06.699256 master-1 kubenswrapper[4740]: I1014 13:10:06.699029 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-1" Oct 14 13:10:06.718021 master-1 kubenswrapper[4740]: I1014 13:10:06.717974 4740 scope.go:117] "RemoveContainer" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" Oct 14 13:10:06.724157 master-1 kubenswrapper[4740]: E1014 13:10:06.721084 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e\": container with ID starting with 542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e not found: ID does not exist" containerID="542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e" Oct 14 13:10:06.724580 master-1 kubenswrapper[4740]: I1014 13:10:06.724516 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e"} err="failed to get container status \"542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e\": rpc error: code = NotFound desc = could not find container \"542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e\": container with ID starting with 542226327fe0c8e0b402eb435d8d4ee83fd6233093d1cf13716c8e1a5f590c7e not found: ID does not exist" Oct 14 13:10:06.756597 master-1 kubenswrapper[4740]: I1014 13:10:06.756540 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fmkcf"] Oct 14 13:10:06.765697 master-1 kubenswrapper[4740]: I1014 13:10:06.765659 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fmkcf"] Oct 14 13:10:06.770494 master-1 kubenswrapper[4740]: I1014 13:10:06.770446 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:06.770494 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:06.770494 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:06.770494 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:06.770908 master-1 kubenswrapper[4740]: I1014 13:10:06.770860 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:06.952989 master-1 kubenswrapper[4740]: I1014 13:10:06.952837 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" path="/var/lib/kubelet/pods/cdcecfd4-6c46-4175-b7f6-5890309ea743/volumes" Oct 14 13:10:07.120474 master-1 kubenswrapper[4740]: I1014 13:10:07.120377 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 14 13:10:07.122783 master-1 kubenswrapper[4740]: E1014 13:10:07.122733 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c" containerName="installer" Oct 14 13:10:07.123007 master-1 kubenswrapper[4740]: I1014 13:10:07.122978 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c" containerName="installer" Oct 14 13:10:07.123172 master-1 kubenswrapper[4740]: E1014 13:10:07.123148 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85fdc046-3cba-4b6c-b9a2-7cb15289db21" containerName="installer" Oct 14 13:10:07.123389 master-1 kubenswrapper[4740]: I1014 13:10:07.123363 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="85fdc046-3cba-4b6c-b9a2-7cb15289db21" containerName="installer" Oct 14 13:10:07.123569 master-1 kubenswrapper[4740]: E1014 13:10:07.123545 4740 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerName="kube-multus-additional-cni-plugins" Oct 14 13:10:07.123712 master-1 kubenswrapper[4740]: I1014 13:10:07.123688 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerName="kube-multus-additional-cni-plugins" Oct 14 13:10:07.124090 master-1 kubenswrapper[4740]: I1014 13:10:07.124060 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecfd4-6c46-4175-b7f6-5890309ea743" containerName="kube-multus-additional-cni-plugins" Oct 14 13:10:07.124333 master-1 kubenswrapper[4740]: I1014 13:10:07.124303 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="85fdc046-3cba-4b6c-b9a2-7cb15289db21" containerName="installer" Oct 14 13:10:07.124528 master-1 kubenswrapper[4740]: I1014 13:10:07.124504 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4ff9ad-1c3c-4bcc-8046-1f3cdfd8fb8c" containerName="installer" Oct 14 13:10:07.125575 master-1 kubenswrapper[4740]: I1014 13:10:07.125541 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:10:07.128799 master-1 kubenswrapper[4740]: I1014 13:10:07.128754 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 14 13:10:07.129119 master-1 kubenswrapper[4740]: I1014 13:10:07.129095 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Oct 14 13:10:07.129588 master-1 kubenswrapper[4740]: I1014 13:10:07.129544 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"openshift-service-ca.crt" Oct 14 13:10:07.201696 master-1 kubenswrapper[4740]: I1014 13:10:07.201626 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldc4f\" (UniqueName: \"kubernetes.io/projected/4d6c6f97-2228-4b4b-abd6-a4a6d00db759-kube-api-access-ldc4f\") pod \"openshift-kube-scheduler-guard-master-1\" (UID: \"4d6c6f97-2228-4b4b-abd6-a4a6d00db759\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:10:07.303676 master-1 kubenswrapper[4740]: I1014 13:10:07.303589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldc4f\" (UniqueName: \"kubernetes.io/projected/4d6c6f97-2228-4b4b-abd6-a4a6d00db759-kube-api-access-ldc4f\") pod \"openshift-kube-scheduler-guard-master-1\" (UID: \"4d6c6f97-2228-4b4b-abd6-a4a6d00db759\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:10:07.337353 master-1 kubenswrapper[4740]: I1014 13:10:07.337290 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldc4f\" (UniqueName: \"kubernetes.io/projected/4d6c6f97-2228-4b4b-abd6-a4a6d00db759-kube-api-access-ldc4f\") pod \"openshift-kube-scheduler-guard-master-1\" (UID: \"4d6c6f97-2228-4b4b-abd6-a4a6d00db759\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:10:07.439248 master-1 kubenswrapper[4740]: I1014 13:10:07.439171 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:10:07.770221 master-1 kubenswrapper[4740]: I1014 13:10:07.770135 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:07.770221 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:07.770221 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:07.770221 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:07.770221 master-1 kubenswrapper[4740]: I1014 13:10:07.770209 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:07.864322 master-1 kubenswrapper[4740]: I1014 13:10:07.860616 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 14 13:10:08.126740 master-1 kubenswrapper[4740]: I1014 13:10:08.125609 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1" Oct 14 13:10:08.332199 master-1 kubenswrapper[4740]: I1014 13:10:08.332140 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-1" Oct 14 13:10:08.720758 master-1 kubenswrapper[4740]: I1014 13:10:08.720658 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" 
event={"ID":"4d6c6f97-2228-4b4b-abd6-a4a6d00db759","Type":"ContainerStarted","Data":"e33a11631a62167ae9cdeda04331964b8c4b8df039030e31fdd17fff805e1ae0"} Oct 14 13:10:08.720758 master-1 kubenswrapper[4740]: I1014 13:10:08.720741 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" event={"ID":"4d6c6f97-2228-4b4b-abd6-a4a6d00db759","Type":"ContainerStarted","Data":"404b52989f46bb2583617bcfbe86ca497036e841a2fb2cb79202a85fe247e80c"} Oct 14 13:10:08.721223 master-1 kubenswrapper[4740]: I1014 13:10:08.721030 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:10:08.721389 master-1 kubenswrapper[4740]: I1014 13:10:08.721334 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:10:08.721488 master-1 kubenswrapper[4740]: I1014 13:10:08.721388 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:10:08.744462 master-1 kubenswrapper[4740]: I1014 13:10:08.744348 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podStartSLOduration=1.744289542 podStartE2EDuration="1.744289542s" podCreationTimestamp="2025-10-14 13:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:10:08.738907698 +0000 UTC 
m=+234.549197037" watchObservedRunningTime="2025-10-14 13:10:08.744289542 +0000 UTC m=+234.554578901" Oct 14 13:10:08.769996 master-1 kubenswrapper[4740]: I1014 13:10:08.769915 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:08.769996 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:08.769996 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:08.769996 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:08.769996 master-1 kubenswrapper[4740]: I1014 13:10:08.769987 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:09.734477 master-1 kubenswrapper[4740]: I1014 13:10:09.733662 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:10:09.734477 master-1 kubenswrapper[4740]: I1014 13:10:09.733861 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:10:09.770775 master-1 kubenswrapper[4740]: I1014 13:10:09.770703 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:09.770775 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:09.770775 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:09.770775 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:09.771339 master-1 kubenswrapper[4740]: I1014 13:10:09.770785 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:10.098459 master-1 kubenswrapper[4740]: I1014 13:10:10.098352 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:10:10.722267 master-1 kubenswrapper[4740]: I1014 13:10:10.722071 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"] Oct 14 13:10:10.742031 master-1 kubenswrapper[4740]: I1014 13:10:10.741955 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:10:10.742031 master-1 kubenswrapper[4740]: I1014 13:10:10.742017 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:10:10.771073 master-1 kubenswrapper[4740]: I1014 13:10:10.771023 4740 patch_prober.go:28] 
interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:10.771073 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:10.771073 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:10.771073 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:10.771401 master-1 kubenswrapper[4740]: I1014 13:10:10.771090 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:11.748889 master-1 kubenswrapper[4740]: I1014 13:10:11.748806 4740 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" containerID="f385d8dcaa94ab3187b83b710fe57b0f187750d657672640e6af7430e879bf5e" exitCode=0 Oct 14 13:10:11.749514 master-1 kubenswrapper[4740]: I1014 13:10:11.748892 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerDied","Data":"f385d8dcaa94ab3187b83b710fe57b0f187750d657672640e6af7430e879bf5e"} Oct 14 13:10:11.770618 master-1 kubenswrapper[4740]: I1014 13:10:11.770522 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:11.770618 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:11.770618 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:11.770618 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:11.771075 master-1 
kubenswrapper[4740]: I1014 13:10:11.770622 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:12.747432 master-1 kubenswrapper[4740]: I1014 13:10:12.747335 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:10:12.764274 master-1 kubenswrapper[4740]: I1014 13:10:12.764172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"9413841217e365c44535d9cbb2430590ab6343e3232163787d636ec31207723f"} Oct 14 13:10:12.764274 master-1 kubenswrapper[4740]: I1014 13:10:12.764247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"d3ed9cbb6f5f77f97002c046a3a9e3e350cee658f8b7fea03e390b2ecfd3b928"} Oct 14 13:10:12.771289 master-1 kubenswrapper[4740]: I1014 13:10:12.771212 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:12.771289 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:12.771289 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:12.771289 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:12.771767 master-1 kubenswrapper[4740]: I1014 13:10:12.771714 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:13.771126 master-1 kubenswrapper[4740]: I1014 13:10:13.771031 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:13.771126 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:13.771126 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:13.771126 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:13.771126 master-1 kubenswrapper[4740]: I1014 13:10:13.771084 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:13.773895 master-1 kubenswrapper[4740]: I1014 13:10:13.773842 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"89fad8183e18ab3ad0c46d272335e5f8","Type":"ContainerStarted","Data":"8092a9e6ffee3c6072e897161e78ff3767262aeb08c415263028b74755398c8c"} Oct 14 13:10:13.774046 master-1 kubenswrapper[4740]: I1014 13:10:13.774021 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:10:13.797538 master-1 kubenswrapper[4740]: I1014 13:10:13.797451 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podStartSLOduration=10.797430287 podStartE2EDuration="10.797430287s" podCreationTimestamp="2025-10-14 13:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-14 13:10:13.795972153 +0000 UTC m=+239.606261492" watchObservedRunningTime="2025-10-14 13:10:13.797430287 +0000 UTC m=+239.607719626" Oct 14 13:10:14.771377 master-1 kubenswrapper[4740]: I1014 13:10:14.771210 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:14.771377 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:14.771377 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:14.771377 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:14.771377 master-1 kubenswrapper[4740]: I1014 13:10:14.771363 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:14.784646 master-1 kubenswrapper[4740]: I1014 13:10:14.784515 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-9npgz_01742ba1-f43b-4ff2-97d5-1a535e925a0f/multus-admission-controller/0.log" Oct 14 13:10:14.784646 master-1 kubenswrapper[4740]: I1014 13:10:14.784613 4740 generic.go:334] "Generic (PLEG): container finished" podID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerID="5da5b33e2e38633a585455a99c0213bbadc15f83146f950b9753cdf3a2191d0a" exitCode=137 Oct 14 13:10:14.786068 master-1 kubenswrapper[4740]: I1014 13:10:14.786010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" event={"ID":"01742ba1-f43b-4ff2-97d5-1a535e925a0f","Type":"ContainerDied","Data":"5da5b33e2e38633a585455a99c0213bbadc15f83146f950b9753cdf3a2191d0a"} Oct 14 13:10:15.408037 master-1 kubenswrapper[4740]: I1014 
13:10:15.407990 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-9npgz_01742ba1-f43b-4ff2-97d5-1a535e925a0f/multus-admission-controller/0.log" Oct 14 13:10:15.408260 master-1 kubenswrapper[4740]: I1014 13:10:15.408072 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:10:15.432195 master-1 kubenswrapper[4740]: I1014 13:10:15.432106 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") pod \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " Oct 14 13:10:15.432660 master-1 kubenswrapper[4740]: I1014 13:10:15.432316 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq44g\" (UniqueName: \"kubernetes.io/projected/01742ba1-f43b-4ff2-97d5-1a535e925a0f-kube-api-access-wq44g\") pod \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\" (UID: \"01742ba1-f43b-4ff2-97d5-1a535e925a0f\") " Oct 14 13:10:15.435635 master-1 kubenswrapper[4740]: I1014 13:10:15.435573 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "01742ba1-f43b-4ff2-97d5-1a535e925a0f" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:10:15.440694 master-1 kubenswrapper[4740]: I1014 13:10:15.440623 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01742ba1-f43b-4ff2-97d5-1a535e925a0f-kube-api-access-wq44g" (OuterVolumeSpecName: "kube-api-access-wq44g") pod "01742ba1-f43b-4ff2-97d5-1a535e925a0f" (UID: "01742ba1-f43b-4ff2-97d5-1a535e925a0f"). 
InnerVolumeSpecName "kube-api-access-wq44g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:10:15.534035 master-1 kubenswrapper[4740]: I1014 13:10:15.533922 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq44g\" (UniqueName: \"kubernetes.io/projected/01742ba1-f43b-4ff2-97d5-1a535e925a0f-kube-api-access-wq44g\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:15.534035 master-1 kubenswrapper[4740]: I1014 13:10:15.533989 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/01742ba1-f43b-4ff2-97d5-1a535e925a0f-webhook-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:15.771610 master-1 kubenswrapper[4740]: I1014 13:10:15.771512 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:15.771610 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:15.771610 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:15.771610 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:15.772539 master-1 kubenswrapper[4740]: I1014 13:10:15.771618 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:15.797950 master-1 kubenswrapper[4740]: I1014 13:10:15.797895 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-9npgz_01742ba1-f43b-4ff2-97d5-1a535e925a0f/multus-admission-controller/0.log" Oct 14 13:10:15.798222 master-1 kubenswrapper[4740]: I1014 13:10:15.797983 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" event={"ID":"01742ba1-f43b-4ff2-97d5-1a535e925a0f","Type":"ContainerDied","Data":"48399003deb36067da52769965d5af83e6a3b7ae56320e44fc673696139e5026"} Oct 14 13:10:15.798222 master-1 kubenswrapper[4740]: I1014 13:10:15.798036 4740 scope.go:117] "RemoveContainer" containerID="dae508e34b6e62af530a4db5d6c36d51de02b0edd600811840e76a6649c9dd75" Oct 14 13:10:15.798222 master-1 kubenswrapper[4740]: I1014 13:10:15.798134 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-9npgz" Oct 14 13:10:15.821980 master-1 kubenswrapper[4740]: I1014 13:10:15.821931 4740 scope.go:117] "RemoveContainer" containerID="5da5b33e2e38633a585455a99c0213bbadc15f83146f950b9753cdf3a2191d0a" Oct 14 13:10:15.858418 master-1 kubenswrapper[4740]: I1014 13:10:15.857972 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-9npgz"] Oct 14 13:10:15.864567 master-1 kubenswrapper[4740]: I1014 13:10:15.864494 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-9npgz"] Oct 14 13:10:16.069557 master-1 kubenswrapper[4740]: E1014 13:10:16.069383 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" podUID="0a959dc9-9b10-4cb5-b750-bedfa6fff093" Oct 14 13:10:16.771697 master-1 kubenswrapper[4740]: I1014 13:10:16.771610 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:16.771697 master-1 kubenswrapper[4740]: [-]has-synced failed: 
reason withheld Oct 14 13:10:16.771697 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:16.771697 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:16.772709 master-1 kubenswrapper[4740]: I1014 13:10:16.771705 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:16.808625 master-1 kubenswrapper[4740]: I1014 13:10:16.808555 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:10:16.955471 master-1 kubenswrapper[4740]: I1014 13:10:16.955365 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" path="/var/lib/kubelet/pods/01742ba1-f43b-4ff2-97d5-1a535e925a0f/volumes" Oct 14 13:10:17.774531 master-1 kubenswrapper[4740]: I1014 13:10:17.774454 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:17.774531 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:17.774531 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:17.774531 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:17.775245 master-1 kubenswrapper[4740]: I1014 13:10:17.774535 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:17.821541 master-1 kubenswrapper[4740]: I1014 13:10:17.821441 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-mgc7h_ec085d84-4833-4e0b-9e6a-35b983a7059b/multus-admission-controller/0.log" Oct 14 13:10:17.821541 master-1 kubenswrapper[4740]: I1014 13:10:17.821524 4740 generic.go:334] "Generic (PLEG): container finished" podID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerID="67c17553d117fd8f968f52bb343a859674579a0e8b60300d9bbc090906179fe3" exitCode=137 Oct 14 13:10:17.821977 master-1 kubenswrapper[4740]: I1014 13:10:17.821583 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" event={"ID":"ec085d84-4833-4e0b-9e6a-35b983a7059b","Type":"ContainerDied","Data":"67c17553d117fd8f968f52bb343a859674579a0e8b60300d9bbc090906179fe3"} Oct 14 13:10:18.146187 master-1 kubenswrapper[4740]: I1014 13:10:18.146131 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-1" Oct 14 13:10:18.164487 master-1 kubenswrapper[4740]: I1014 13:10:18.164415 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-1" Oct 14 13:10:18.185665 master-1 kubenswrapper[4740]: I1014 13:10:18.185572 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-mgc7h_ec085d84-4833-4e0b-9e6a-35b983a7059b/multus-admission-controller/0.log" Oct 14 13:10:18.185945 master-1 kubenswrapper[4740]: I1014 13:10:18.185687 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:10:18.280223 master-1 kubenswrapper[4740]: I1014 13:10:18.280184 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7ck6\" (UniqueName: \"kubernetes.io/projected/ec085d84-4833-4e0b-9e6a-35b983a7059b-kube-api-access-l7ck6\") pod \"ec085d84-4833-4e0b-9e6a-35b983a7059b\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " Oct 14 13:10:18.280575 master-1 kubenswrapper[4740]: I1014 13:10:18.280551 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") pod \"ec085d84-4833-4e0b-9e6a-35b983a7059b\" (UID: \"ec085d84-4833-4e0b-9e6a-35b983a7059b\") " Oct 14 13:10:18.284786 master-1 kubenswrapper[4740]: I1014 13:10:18.284738 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "ec085d84-4833-4e0b-9e6a-35b983a7059b" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:10:18.285424 master-1 kubenswrapper[4740]: I1014 13:10:18.285381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec085d84-4833-4e0b-9e6a-35b983a7059b-kube-api-access-l7ck6" (OuterVolumeSpecName: "kube-api-access-l7ck6") pod "ec085d84-4833-4e0b-9e6a-35b983a7059b" (UID: "ec085d84-4833-4e0b-9e6a-35b983a7059b"). InnerVolumeSpecName "kube-api-access-l7ck6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:10:18.392372 master-1 kubenswrapper[4740]: I1014 13:10:18.389416 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ec085d84-4833-4e0b-9e6a-35b983a7059b-webhook-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:18.392372 master-1 kubenswrapper[4740]: I1014 13:10:18.389475 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7ck6\" (UniqueName: \"kubernetes.io/projected/ec085d84-4833-4e0b-9e6a-35b983a7059b-kube-api-access-l7ck6\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:18.771107 master-1 kubenswrapper[4740]: I1014 13:10:18.771021 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:18.771107 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:18.771107 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:18.771107 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:18.771594 master-1 kubenswrapper[4740]: I1014 13:10:18.771147 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:18.832490 master-1 kubenswrapper[4740]: I1014 13:10:18.832422 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-77b66fddc8-mgc7h_ec085d84-4833-4e0b-9e6a-35b983a7059b/multus-admission-controller/0.log" Oct 14 13:10:18.833298 master-1 kubenswrapper[4740]: I1014 13:10:18.832584 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" 
event={"ID":"ec085d84-4833-4e0b-9e6a-35b983a7059b","Type":"ContainerDied","Data":"5ecf35a02f431bb4456c5b0413049c600db729de59229f6510f04427ca56460a"} Oct 14 13:10:18.833298 master-1 kubenswrapper[4740]: I1014 13:10:18.832689 4740 scope.go:117] "RemoveContainer" containerID="b571958693e1e882b82f62f00a695871bd2fb33a9bce37964d1fc0625a97ed39" Oct 14 13:10:18.833298 master-1 kubenswrapper[4740]: I1014 13:10:18.833144 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-77b66fddc8-mgc7h" Oct 14 13:10:18.862076 master-1 kubenswrapper[4740]: I1014 13:10:18.862010 4740 scope.go:117] "RemoveContainer" containerID="67c17553d117fd8f968f52bb343a859674579a0e8b60300d9bbc090906179fe3" Oct 14 13:10:18.934050 master-1 kubenswrapper[4740]: I1014 13:10:18.933990 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"] Oct 14 13:10:18.972156 master-1 kubenswrapper[4740]: I1014 13:10:18.972074 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-77b66fddc8-mgc7h"] Oct 14 13:10:19.578281 master-1 kubenswrapper[4740]: E1014 13:10:19.578161 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" podUID="686cb294-f678-4e26-9305-2756573cadb8" Oct 14 13:10:19.771709 master-1 kubenswrapper[4740]: I1014 13:10:19.771593 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:19.771709 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:19.771709 master-1 kubenswrapper[4740]: 
[+]process-running ok Oct 14 13:10:19.771709 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:19.772158 master-1 kubenswrapper[4740]: I1014 13:10:19.771712 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:19.839818 master-1 kubenswrapper[4740]: I1014 13:10:19.839668 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:10:20.771063 master-1 kubenswrapper[4740]: I1014 13:10:20.770976 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:20.771063 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:20.771063 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:20.771063 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:20.771063 master-1 kubenswrapper[4740]: I1014 13:10:20.771049 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:20.927631 master-1 kubenswrapper[4740]: I1014 13:10:20.927570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") pod \"route-controller-manager-7f89f9db8c-j4hd5\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:10:20.928175 
master-1 kubenswrapper[4740]: E1014 13:10:20.927812 4740 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Oct 14 13:10:20.928175 master-1 kubenswrapper[4740]: E1014 13:10:20.927958 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca podName:0a959dc9-9b10-4cb5-b750-bedfa6fff093 nodeName:}" failed. No retries permitted until 2025-10-14 13:12:22.927925269 +0000 UTC m=+368.738214638 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca") pod "route-controller-manager-7f89f9db8c-j4hd5" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093") : configmap "client-ca" not found Oct 14 13:10:20.955346 master-1 kubenswrapper[4740]: I1014 13:10:20.955285 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" path="/var/lib/kubelet/pods/ec085d84-4833-4e0b-9e6a-35b983a7059b/volumes" Oct 14 13:10:21.771070 master-1 kubenswrapper[4740]: I1014 13:10:21.770971 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:21.771070 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:21.771070 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:21.771070 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:21.771070 master-1 kubenswrapper[4740]: I1014 13:10:21.771060 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 
13:10:22.622312 master-1 kubenswrapper[4740]: I1014 13:10:22.622197 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: E1014 13:10:22.622629 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="multus-admission-controller" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.622656 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="multus-admission-controller" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: E1014 13:10:22.622686 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="kube-rbac-proxy" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.622700 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="kube-rbac-proxy" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: E1014 13:10:22.622733 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="multus-admission-controller" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.622748 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="multus-admission-controller" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: E1014 13:10:22.622771 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="kube-rbac-proxy" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.622784 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="kube-rbac-proxy" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.622971 
4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="multus-admission-controller" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.623000 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec085d84-4833-4e0b-9e6a-35b983a7059b" containerName="kube-rbac-proxy" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.623019 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="kube-rbac-proxy" Oct 14 13:10:22.623096 master-1 kubenswrapper[4740]: I1014 13:10:22.623052 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="01742ba1-f43b-4ff2-97d5-1a535e925a0f" containerName="multus-admission-controller" Oct 14 13:10:22.624290 master-1 kubenswrapper[4740]: I1014 13:10:22.624203 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.628063 master-1 kubenswrapper[4740]: I1014 13:10:22.627995 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 13:10:22.635222 master-1 kubenswrapper[4740]: I1014 13:10:22.635150 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 14 13:10:22.655803 master-1 kubenswrapper[4740]: I1014 13:10:22.655745 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-var-lock\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.655995 master-1 kubenswrapper[4740]: I1014 13:10:22.655855 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a804ef07-67ce-4467-abee-1fc22d6d528f-kube-api-access\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.655995 master-1 kubenswrapper[4740]: I1014 13:10:22.655928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-kubelet-dir\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.757664 master-1 kubenswrapper[4740]: I1014 13:10:22.757578 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-var-lock\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.757949 master-1 kubenswrapper[4740]: I1014 13:10:22.757676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a804ef07-67ce-4467-abee-1fc22d6d528f-kube-api-access\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.757949 master-1 kubenswrapper[4740]: I1014 13:10:22.757743 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-kubelet-dir\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " 
pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.757949 master-1 kubenswrapper[4740]: I1014 13:10:22.757739 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-var-lock\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.757949 master-1 kubenswrapper[4740]: I1014 13:10:22.757910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-kubelet-dir\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.770991 master-1 kubenswrapper[4740]: I1014 13:10:22.770935 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:22.770991 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:22.770991 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:22.770991 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:22.771280 master-1 kubenswrapper[4740]: I1014 13:10:22.771003 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:22.789158 master-1 kubenswrapper[4740]: I1014 13:10:22.789071 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a804ef07-67ce-4467-abee-1fc22d6d528f-kube-api-access\") pod \"installer-3-retry-1-master-1\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:22.950097 master-1 kubenswrapper[4740]: I1014 13:10:22.949910 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:23.445157 master-1 kubenswrapper[4740]: I1014 13:10:23.445064 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 14 13:10:23.452677 master-1 kubenswrapper[4740]: W1014 13:10:23.452605 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda804ef07_67ce_4467_abee_1fc22d6d528f.slice/crio-c1a76217927e7639386e63a8649f82b5eb1cb9c12cafe6985144fadd4993ce85 WatchSource:0}: Error finding container c1a76217927e7639386e63a8649f82b5eb1cb9c12cafe6985144fadd4993ce85: Status 404 returned error can't find the container with id c1a76217927e7639386e63a8649f82b5eb1cb9c12cafe6985144fadd4993ce85 Oct 14 13:10:23.771484 master-1 kubenswrapper[4740]: I1014 13:10:23.771406 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:23.771484 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:23.771484 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:23.771484 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:23.772599 master-1 kubenswrapper[4740]: I1014 13:10:23.771515 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:23.870255 master-1 kubenswrapper[4740]: I1014 13:10:23.870158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" event={"ID":"a804ef07-67ce-4467-abee-1fc22d6d528f","Type":"ContainerStarted","Data":"92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7"} Oct 14 13:10:23.870255 master-1 kubenswrapper[4740]: I1014 13:10:23.870259 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" event={"ID":"a804ef07-67ce-4467-abee-1fc22d6d528f","Type":"ContainerStarted","Data":"c1a76217927e7639386e63a8649f82b5eb1cb9c12cafe6985144fadd4993ce85"} Oct 14 13:10:23.897266 master-1 kubenswrapper[4740]: I1014 13:10:23.897140 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" podStartSLOduration=1.897111141 podStartE2EDuration="1.897111141s" podCreationTimestamp="2025-10-14 13:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:10:23.893130293 +0000 UTC m=+249.703419662" watchObservedRunningTime="2025-10-14 13:10:23.897111141 +0000 UTC m=+249.707400510" Oct 14 13:10:24.488031 master-1 kubenswrapper[4740]: I1014 13:10:24.487924 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") pod \"controller-manager-bcf7659b-pckjm\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:10:24.488381 master-1 kubenswrapper[4740]: E1014 13:10:24.488182 4740 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Oct 14 
13:10:24.488381 master-1 kubenswrapper[4740]: E1014 13:10:24.488343 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca podName:686cb294-f678-4e26-9305-2756573cadb8 nodeName:}" failed. No retries permitted until 2025-10-14 13:12:26.488311965 +0000 UTC m=+372.298601334 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca") pod "controller-manager-bcf7659b-pckjm" (UID: "686cb294-f678-4e26-9305-2756573cadb8") : configmap "client-ca" not found Oct 14 13:10:24.771363 master-1 kubenswrapper[4740]: I1014 13:10:24.771174 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:24.771363 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:24.771363 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:24.771363 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:24.771363 master-1 kubenswrapper[4740]: I1014 13:10:24.771295 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:25.619748 master-1 kubenswrapper[4740]: I1014 13:10:25.619668 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:10:25.771144 master-1 kubenswrapper[4740]: I1014 13:10:25.771022 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:25.771144 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:25.771144 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:25.771144 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:25.771568 master-1 kubenswrapper[4740]: I1014 13:10:25.771204 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:26.775558 master-1 kubenswrapper[4740]: I1014 13:10:26.775480 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:26.775558 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:26.775558 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:26.775558 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:26.776753 master-1 kubenswrapper[4740]: I1014 13:10:26.775585 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:27.771445 master-1 kubenswrapper[4740]: I1014 13:10:27.771367 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:27.771445 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:27.771445 master-1 kubenswrapper[4740]: 
[+]process-running ok Oct 14 13:10:27.771445 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:27.771908 master-1 kubenswrapper[4740]: I1014 13:10:27.771470 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:28.770799 master-1 kubenswrapper[4740]: I1014 13:10:28.770711 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:28.770799 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:28.770799 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:28.770799 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:28.770799 master-1 kubenswrapper[4740]: I1014 13:10:28.770793 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:28.920914 master-1 kubenswrapper[4740]: I1014 13:10:28.920806 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-1"] Oct 14 13:10:28.922882 master-1 kubenswrapper[4740]: I1014 13:10:28.922818 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:28.935684 master-1 kubenswrapper[4740]: I1014 13:10:28.935598 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-1"] Oct 14 13:10:28.955993 master-1 kubenswrapper[4740]: I1014 13:10:28.955922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kube-api-access\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:28.956200 master-1 kubenswrapper[4740]: I1014 13:10:28.956064 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:28.956200 master-1 kubenswrapper[4740]: I1014 13:10:28.956116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-var-lock\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.057153 master-1 kubenswrapper[4740]: I1014 13:10:29.056950 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-var-lock\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.057153 master-1 kubenswrapper[4740]: I1014 13:10:29.057047 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kube-api-access\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.057153 master-1 kubenswrapper[4740]: I1014 13:10:29.057070 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-var-lock\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.057590 master-1 kubenswrapper[4740]: I1014 13:10:29.057326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.057590 master-1 kubenswrapper[4740]: I1014 13:10:29.057505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.090001 master-1 kubenswrapper[4740]: I1014 13:10:29.089922 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kube-api-access\") pod \"installer-5-master-1\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") " pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.243744 master-1 kubenswrapper[4740]: I1014 13:10:29.243661 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1" Oct 14 13:10:29.708146 master-1 kubenswrapper[4740]: I1014 13:10:29.707983 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-1"] Oct 14 13:10:29.770521 master-1 kubenswrapper[4740]: I1014 13:10:29.770471 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:29.770521 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:29.770521 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:29.770521 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:29.770882 master-1 kubenswrapper[4740]: I1014 13:10:29.770523 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:29.914048 master-1 kubenswrapper[4740]: I1014 13:10:29.913992 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"29394c7c-e1cf-4e8e-abef-d50e9466a5a6","Type":"ContainerStarted","Data":"c3f717cff0e9cb77bfb2ca206186d5dc520cb695f90da457c9152dec6b196854"} Oct 14 13:10:30.785458 master-1 kubenswrapper[4740]: I1014 13:10:30.785382 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:30.785458 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:30.785458 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:30.785458 
master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:30.786469 master-1 kubenswrapper[4740]: I1014 13:10:30.785469 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:30.926036 master-1 kubenswrapper[4740]: I1014 13:10:30.925935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"29394c7c-e1cf-4e8e-abef-d50e9466a5a6","Type":"ContainerStarted","Data":"4fd0324278c14bdc7968a2293eb7c15d589ae35f2214f59eeecf0fd590986edd"} Oct 14 13:10:30.950999 master-1 kubenswrapper[4740]: I1014 13:10:30.950851 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-1" podStartSLOduration=2.950805798 podStartE2EDuration="2.950805798s" podCreationTimestamp="2025-10-14 13:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:10:30.948688971 +0000 UTC m=+256.758978330" watchObservedRunningTime="2025-10-14 13:10:30.950805798 +0000 UTC m=+256.761095167" Oct 14 13:10:31.770794 master-1 kubenswrapper[4740]: I1014 13:10:31.770679 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:31.770794 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:31.770794 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:31.770794 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:31.771492 master-1 kubenswrapper[4740]: I1014 13:10:31.770830 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:32.771289 master-1 kubenswrapper[4740]: I1014 13:10:32.771196 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:32.771289 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:32.771289 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:32.771289 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:32.771987 master-1 kubenswrapper[4740]: I1014 13:10:32.771319 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:33.770829 master-1 kubenswrapper[4740]: I1014 13:10:33.770762 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:33.770829 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:33.770829 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:33.770829 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:33.772014 master-1 kubenswrapper[4740]: I1014 13:10:33.771448 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:33.819369 
master-1 kubenswrapper[4740]: I1014 13:10:33.819281 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"] Oct 14 13:10:33.820421 master-1 kubenswrapper[4740]: I1014 13:10:33.820372 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:33.823775 master-1 kubenswrapper[4740]: I1014 13:10:33.823718 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 14 13:10:33.831721 master-1 kubenswrapper[4740]: I1014 13:10:33.831389 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"] Oct 14 13:10:33.933582 master-1 kubenswrapper[4740]: I1014 13:10:33.933379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kube-api-access\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:33.933941 master-1 kubenswrapper[4740]: I1014 13:10:33.933867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:33.934303 master-1 kubenswrapper[4740]: I1014 13:10:33.934211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-var-lock\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.021138 
master-1 kubenswrapper[4740]: I1014 13:10:34.020918 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 14 13:10:34.021462 master-1 kubenswrapper[4740]: I1014 13:10:34.021363 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" podUID="a804ef07-67ce-4467-abee-1fc22d6d528f" containerName="installer" containerID="cri-o://92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7" gracePeriod=30 Oct 14 13:10:34.035374 master-1 kubenswrapper[4740]: I1014 13:10:34.035328 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-var-lock\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.035703 master-1 kubenswrapper[4740]: I1014 13:10:34.035494 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-var-lock\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.035812 master-1 kubenswrapper[4740]: I1014 13:10:34.035669 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kube-api-access\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.036068 master-1 kubenswrapper[4740]: I1014 13:10:34.036020 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.036297 master-1 kubenswrapper[4740]: I1014 13:10:34.036217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kubelet-dir\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.103852 master-1 kubenswrapper[4740]: I1014 13:10:34.103773 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kube-api-access\") pod \"installer-1-master-1\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.145440 master-1 kubenswrapper[4740]: I1014 13:10:34.145292 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:10:34.625681 master-1 kubenswrapper[4740]: I1014 13:10:34.625513 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"] Oct 14 13:10:34.636423 master-1 kubenswrapper[4740]: W1014 13:10:34.636334 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod946295a4_6f1e_44dd_a7f4_ab062bf3f4b9.slice/crio-cf093794044825a5f5c57160c7400f9bc5cf0ec0224001d1c365593bee764872 WatchSource:0}: Error finding container cf093794044825a5f5c57160c7400f9bc5cf0ec0224001d1c365593bee764872: Status 404 returned error can't find the container with id cf093794044825a5f5c57160c7400f9bc5cf0ec0224001d1c365593bee764872 Oct 14 13:10:34.771396 master-1 kubenswrapper[4740]: I1014 13:10:34.771293 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:34.771396 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:34.771396 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:34.771396 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:34.772479 master-1 kubenswrapper[4740]: I1014 13:10:34.771399 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:34.956563 master-1 kubenswrapper[4740]: I1014 13:10:34.955669 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9","Type":"ContainerStarted","Data":"cf093794044825a5f5c57160c7400f9bc5cf0ec0224001d1c365593bee764872"} 
Oct 14 13:10:35.771569 master-1 kubenswrapper[4740]: I1014 13:10:35.771494 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:35.771569 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:35.771569 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:35.771569 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:35.772202 master-1 kubenswrapper[4740]: I1014 13:10:35.771589 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:35.964131 master-1 kubenswrapper[4740]: I1014 13:10:35.964060 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9","Type":"ContainerStarted","Data":"97abe0d8c7e85255ddcf3f08db5d8fadc02560d6e693cb64ea478661abddbf69"} Oct 14 13:10:35.990302 master-1 kubenswrapper[4740]: I1014 13:10:35.990151 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-1" podStartSLOduration=2.990122058 podStartE2EDuration="2.990122058s" podCreationTimestamp="2025-10-14 13:10:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:10:35.98795857 +0000 UTC m=+261.798247929" watchObservedRunningTime="2025-10-14 13:10:35.990122058 +0000 UTC m=+261.800411417" Oct 14 13:10:36.417070 master-1 kubenswrapper[4740]: I1014 13:10:36.416974 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-1"] 
Oct 14 13:10:36.418185 master-1 kubenswrapper[4740]: I1014 13:10:36.418133 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.447868 master-1 kubenswrapper[4740]: I1014 13:10:36.447779 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-1"] Oct 14 13:10:36.473699 master-1 kubenswrapper[4740]: I1014 13:10:36.473611 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-var-lock\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.474083 master-1 kubenswrapper[4740]: I1014 13:10:36.474014 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.474171 master-1 kubenswrapper[4740]: I1014 13:10:36.474087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kube-api-access\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.524595 master-1 kubenswrapper[4740]: E1014 13:10:36.524495 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" 
podUID="cc579fa5-c1e0-40ed-b1f3-e953a42e74d6" Oct 14 13:10:36.575652 master-1 kubenswrapper[4740]: I1014 13:10:36.575560 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kube-api-access\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.575652 master-1 kubenswrapper[4740]: I1014 13:10:36.575634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.575983 master-1 kubenswrapper[4740]: I1014 13:10:36.575723 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-var-lock\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.575983 master-1 kubenswrapper[4740]: I1014 13:10:36.575889 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.575983 master-1 kubenswrapper[4740]: I1014 13:10:36.575960 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-var-lock\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " 
pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.618257 master-1 kubenswrapper[4740]: E1014 13:10:36.618177 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" podUID="180ced15-1fb1-464d-85f2-0bcc0d836dab" Oct 14 13:10:36.771821 master-1 kubenswrapper[4740]: I1014 13:10:36.771686 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:36.771821 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:36.771821 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:36.771821 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:36.772882 master-1 kubenswrapper[4740]: I1014 13:10:36.771811 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:36.822044 master-1 kubenswrapper[4740]: I1014 13:10:36.821948 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kube-api-access\") pod \"installer-4-master-1\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:36.984810 master-1 kubenswrapper[4740]: I1014 13:10:36.983950 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:10:36.985134 master-1 kubenswrapper[4740]: I1014 13:10:36.984921 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:10:37.038848 master-1 kubenswrapper[4740]: I1014 13:10:37.038666 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:10:37.541097 master-1 kubenswrapper[4740]: I1014 13:10:37.541027 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-1"] Oct 14 13:10:37.547409 master-1 kubenswrapper[4740]: W1014 13:10:37.547360 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod32fb33a3_6da2_4d25_b5e9_799604d68cc9.slice/crio-a03fb1716c8fdb868d08d31a7ef51262391ab20de3ba20469a09d35614541922 WatchSource:0}: Error finding container a03fb1716c8fdb868d08d31a7ef51262391ab20de3ba20469a09d35614541922: Status 404 returned error can't find the container with id a03fb1716c8fdb868d08d31a7ef51262391ab20de3ba20469a09d35614541922 Oct 14 13:10:37.771088 master-1 kubenswrapper[4740]: I1014 13:10:37.770953 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:37.771088 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:37.771088 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:37.771088 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:37.771368 master-1 kubenswrapper[4740]: I1014 13:10:37.771115 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:37.993646 master-1 kubenswrapper[4740]: I1014 13:10:37.993576 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"32fb33a3-6da2-4d25-b5e9-799604d68cc9","Type":"ContainerStarted","Data":"a03fb1716c8fdb868d08d31a7ef51262391ab20de3ba20469a09d35614541922"} Oct 14 13:10:38.772125 master-1 kubenswrapper[4740]: I1014 13:10:38.771999 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:38.772125 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:38.772125 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:38.772125 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:38.772585 master-1 kubenswrapper[4740]: I1014 13:10:38.772122 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:39.002957 master-1 kubenswrapper[4740]: I1014 13:10:39.002858 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"32fb33a3-6da2-4d25-b5e9-799604d68cc9","Type":"ContainerStarted","Data":"cf6d422071d841561c6f78de1237a1026480edd8f4b4fe6ca27ad710e5faa5ea"} Oct 14 13:10:39.393191 master-1 kubenswrapper[4740]: I1014 13:10:39.393092 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-1" 
podStartSLOduration=3.393065005 podStartE2EDuration="3.393065005s" podCreationTimestamp="2025-10-14 13:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:10:39.390266929 +0000 UTC m=+265.200556318" watchObservedRunningTime="2025-10-14 13:10:39.393065005 +0000 UTC m=+265.203354344" Oct 14 13:10:39.771300 master-1 kubenswrapper[4740]: I1014 13:10:39.771131 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:39.771300 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:39.771300 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:39.771300 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:39.771300 master-1 kubenswrapper[4740]: I1014 13:10:39.771262 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:40.105346 master-1 kubenswrapper[4740]: I1014 13:10:40.104915 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:10:40.771530 master-1 kubenswrapper[4740]: I1014 13:10:40.771469 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:40.771530 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:40.771530 master-1 
kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:40.771530 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:40.771987 master-1 kubenswrapper[4740]: I1014 13:10:40.771546 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:41.454878 master-1 kubenswrapper[4740]: I1014 13:10:41.454738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:10:41.455946 master-1 kubenswrapper[4740]: E1014 13:10:41.455011 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:12:43.454966673 +0000 UTC m=+389.265256032 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:10:41.556144 master-1 kubenswrapper[4740]: I1014 13:10:41.556055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:10:41.556494 master-1 kubenswrapper[4740]: E1014 13:10:41.556441 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:12:43.556413215 +0000 UTC m=+389.366702574 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:10:41.771470 master-1 kubenswrapper[4740]: I1014 13:10:41.771384 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:41.771470 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:41.771470 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:41.771470 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:41.771953 master-1 kubenswrapper[4740]: I1014 13:10:41.771498 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:42.771353 master-1 kubenswrapper[4740]: I1014 13:10:42.771214 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:42.771353 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:42.771353 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:42.771353 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:42.772159 master-1 kubenswrapper[4740]: I1014 13:10:42.771365 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:43.771487 master-1 kubenswrapper[4740]: I1014 13:10:43.771392 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:43.771487 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:43.771487 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:43.771487 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:43.772561 master-1 kubenswrapper[4740]: I1014 13:10:43.771491 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:44.771776 master-1 kubenswrapper[4740]: I1014 13:10:44.771647 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:44.771776 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:44.771776 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:44.771776 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:44.772909 master-1 kubenswrapper[4740]: I1014 13:10:44.771800 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:45.770987 master-1 kubenswrapper[4740]: I1014 13:10:45.770889 4740 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:45.770987 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:45.770987 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:45.770987 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:45.770987 master-1 kubenswrapper[4740]: I1014 13:10:45.770978 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:46.771148 master-1 kubenswrapper[4740]: I1014 13:10:46.771044 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:46.771148 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:46.771148 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:46.771148 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:46.772864 master-1 kubenswrapper[4740]: I1014 13:10:46.771157 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:47.771137 master-1 kubenswrapper[4740]: I1014 13:10:47.771061 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 
13:10:47.771137 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:47.771137 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:47.771137 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:47.771880 master-1 kubenswrapper[4740]: I1014 13:10:47.771151 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:48.220438 master-1 kubenswrapper[4740]: I1014 13:10:48.218406 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-j76rq"] Oct 14 13:10:48.220438 master-1 kubenswrapper[4740]: I1014 13:10:48.219583 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:48.224804 master-1 kubenswrapper[4740]: I1014 13:10:48.224760 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 14 13:10:48.224965 master-1 kubenswrapper[4740]: I1014 13:10:48.224826 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 14 13:10:48.225142 master-1 kubenswrapper[4740]: I1014 13:10:48.225110 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 14 13:10:48.250575 master-1 kubenswrapper[4740]: I1014 13:10:48.246861 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-j76rq"] Oct 14 13:10:48.265129 master-1 kubenswrapper[4740]: I1014 13:10:48.265066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7772z\" (UniqueName: 
\"kubernetes.io/projected/b102298d-f60b-4003-b0b2-55cbada95967-kube-api-access-7772z\") pod \"ingress-canary-j76rq\" (UID: \"b102298d-f60b-4003-b0b2-55cbada95967\") " pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:48.265666 master-1 kubenswrapper[4740]: I1014 13:10:48.265583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b102298d-f60b-4003-b0b2-55cbada95967-cert\") pod \"ingress-canary-j76rq\" (UID: \"b102298d-f60b-4003-b0b2-55cbada95967\") " pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:48.367133 master-1 kubenswrapper[4740]: I1014 13:10:48.367052 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b102298d-f60b-4003-b0b2-55cbada95967-cert\") pod \"ingress-canary-j76rq\" (UID: \"b102298d-f60b-4003-b0b2-55cbada95967\") " pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:48.367493 master-1 kubenswrapper[4740]: I1014 13:10:48.367189 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7772z\" (UniqueName: \"kubernetes.io/projected/b102298d-f60b-4003-b0b2-55cbada95967-kube-api-access-7772z\") pod \"ingress-canary-j76rq\" (UID: \"b102298d-f60b-4003-b0b2-55cbada95967\") " pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:48.367493 master-1 kubenswrapper[4740]: E1014 13:10:48.367338 4740 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Oct 14 13:10:48.367493 master-1 kubenswrapper[4740]: E1014 13:10:48.367463 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b102298d-f60b-4003-b0b2-55cbada95967-cert podName:b102298d-f60b-4003-b0b2-55cbada95967 nodeName:}" failed. No retries permitted until 2025-10-14 13:10:48.867428447 +0000 UTC m=+274.677717816 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b102298d-f60b-4003-b0b2-55cbada95967-cert") pod "ingress-canary-j76rq" (UID: "b102298d-f60b-4003-b0b2-55cbada95967") : secret "canary-serving-cert" not found Oct 14 13:10:48.399649 master-1 kubenswrapper[4740]: I1014 13:10:48.399568 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7772z\" (UniqueName: \"kubernetes.io/projected/b102298d-f60b-4003-b0b2-55cbada95967-kube-api-access-7772z\") pod \"ingress-canary-j76rq\" (UID: \"b102298d-f60b-4003-b0b2-55cbada95967\") " pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:48.776629 master-1 kubenswrapper[4740]: I1014 13:10:48.776535 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:48.776629 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:48.776629 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:48.776629 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:48.777604 master-1 kubenswrapper[4740]: I1014 13:10:48.776667 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:48.880007 master-1 kubenswrapper[4740]: I1014 13:10:48.879900 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b102298d-f60b-4003-b0b2-55cbada95967-cert\") pod \"ingress-canary-j76rq\" (UID: \"b102298d-f60b-4003-b0b2-55cbada95967\") " pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:48.884397 master-1 kubenswrapper[4740]: I1014 13:10:48.884318 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b102298d-f60b-4003-b0b2-55cbada95967-cert\") pod \"ingress-canary-j76rq\" (UID: \"b102298d-f60b-4003-b0b2-55cbada95967\") " pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:49.075222 master-1 kubenswrapper[4740]: I1014 13:10:49.075014 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/0.log" Oct 14 13:10:49.075222 master-1 kubenswrapper[4740]: I1014 13:10:49.075118 4740 generic.go:334] "Generic (PLEG): container finished" podID="398ba6fd-0f8f-46af-b690-61a6eec9176b" containerID="8c02147a25c6590fc2f39f47ab7a6cfafc0656844334bfba1f068b3fe5d01610" exitCode=1 Oct 14 13:10:49.075222 master-1 kubenswrapper[4740]: I1014 13:10:49.075178 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerDied","Data":"8c02147a25c6590fc2f39f47ab7a6cfafc0656844334bfba1f068b3fe5d01610"} Oct 14 13:10:49.076122 master-1 kubenswrapper[4740]: I1014 13:10:49.076063 4740 scope.go:117] "RemoveContainer" containerID="8c02147a25c6590fc2f39f47ab7a6cfafc0656844334bfba1f068b3fe5d01610" Oct 14 13:10:49.137322 master-1 kubenswrapper[4740]: I1014 13:10:49.137258 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-j76rq" Oct 14 13:10:49.685842 master-1 kubenswrapper[4740]: I1014 13:10:49.685148 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-j76rq"] Oct 14 13:10:49.690265 master-1 kubenswrapper[4740]: W1014 13:10:49.690165 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb102298d_f60b_4003_b0b2_55cbada95967.slice/crio-7af90c9a732b018eeede1efb79848ba909530bdbd7ce3164b511f089e295ae02 WatchSource:0}: Error finding container 7af90c9a732b018eeede1efb79848ba909530bdbd7ce3164b511f089e295ae02: Status 404 returned error can't find the container with id 7af90c9a732b018eeede1efb79848ba909530bdbd7ce3164b511f089e295ae02 Oct 14 13:10:49.770998 master-1 kubenswrapper[4740]: I1014 13:10:49.770934 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:49.770998 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:49.770998 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:49.770998 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:49.771437 master-1 kubenswrapper[4740]: I1014 13:10:49.771021 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:50.085951 master-1 kubenswrapper[4740]: I1014 13:10:50.085880 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/0.log" Oct 14 13:10:50.086523 master-1 
kubenswrapper[4740]: I1014 13:10:50.086060 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerStarted","Data":"4642cf87216d34a41602fbb9cf593d0d329fd43c67ed7b264d9a3b2b3022daaf"} Oct 14 13:10:50.087784 master-1 kubenswrapper[4740]: I1014 13:10:50.087728 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-j76rq" event={"ID":"b102298d-f60b-4003-b0b2-55cbada95967","Type":"ContainerStarted","Data":"757917fbeb483283529d91ee87aee9f570dbe061b41a92783e072e422354d0ba"} Oct 14 13:10:50.087784 master-1 kubenswrapper[4740]: I1014 13:10:50.087775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-j76rq" event={"ID":"b102298d-f60b-4003-b0b2-55cbada95967","Type":"ContainerStarted","Data":"7af90c9a732b018eeede1efb79848ba909530bdbd7ce3164b511f089e295ae02"} Oct 14 13:10:50.772051 master-1 kubenswrapper[4740]: I1014 13:10:50.771916 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:50.772051 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:50.772051 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:50.772051 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:50.772051 master-1 kubenswrapper[4740]: I1014 13:10:50.772043 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:51.771476 master-1 kubenswrapper[4740]: I1014 13:10:51.771386 4740 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:51.771476 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:51.771476 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:51.771476 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:51.772685 master-1 kubenswrapper[4740]: I1014 13:10:51.771482 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:52.770830 master-1 kubenswrapper[4740]: I1014 13:10:52.770745 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:52.770830 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:52.770830 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:52.770830 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:52.771112 master-1 kubenswrapper[4740]: I1014 13:10:52.770878 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:53.771780 master-1 kubenswrapper[4740]: I1014 13:10:53.771704 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 
13:10:53.771780 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:53.771780 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:53.771780 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:53.772587 master-1 kubenswrapper[4740]: I1014 13:10:53.771800 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:54.771145 master-1 kubenswrapper[4740]: I1014 13:10:54.771077 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:54.771145 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:54.771145 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:54.771145 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:54.771463 master-1 kubenswrapper[4740]: I1014 13:10:54.771155 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:55.549223 master-1 kubenswrapper[4740]: I1014 13:10:55.549166 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-retry-1-master-1_a804ef07-67ce-4467-abee-1fc22d6d528f/installer/0.log" Oct 14 13:10:55.550011 master-1 kubenswrapper[4740]: I1014 13:10:55.549267 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:55.578985 master-1 kubenswrapper[4740]: I1014 13:10:55.576483 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-j76rq" podStartSLOduration=7.576464001 podStartE2EDuration="7.576464001s" podCreationTimestamp="2025-10-14 13:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:10:50.15237796 +0000 UTC m=+275.962667409" watchObservedRunningTime="2025-10-14 13:10:55.576464001 +0000 UTC m=+281.386753330" Oct 14 13:10:55.592862 master-1 kubenswrapper[4740]: I1014 13:10:55.592058 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-var-lock\") pod \"a804ef07-67ce-4467-abee-1fc22d6d528f\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " Oct 14 13:10:55.592862 master-1 kubenswrapper[4740]: I1014 13:10:55.592265 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-kubelet-dir\") pod \"a804ef07-67ce-4467-abee-1fc22d6d528f\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " Oct 14 13:10:55.592862 master-1 kubenswrapper[4740]: I1014 13:10:55.592303 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a804ef07-67ce-4467-abee-1fc22d6d528f-kube-api-access\") pod \"a804ef07-67ce-4467-abee-1fc22d6d528f\" (UID: \"a804ef07-67ce-4467-abee-1fc22d6d528f\") " Oct 14 13:10:55.592862 master-1 kubenswrapper[4740]: I1014 13:10:55.592301 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-var-lock" 
(OuterVolumeSpecName: "var-lock") pod "a804ef07-67ce-4467-abee-1fc22d6d528f" (UID: "a804ef07-67ce-4467-abee-1fc22d6d528f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:10:55.592862 master-1 kubenswrapper[4740]: I1014 13:10:55.592407 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a804ef07-67ce-4467-abee-1fc22d6d528f" (UID: "a804ef07-67ce-4467-abee-1fc22d6d528f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:10:55.592862 master-1 kubenswrapper[4740]: I1014 13:10:55.592569 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:55.592862 master-1 kubenswrapper[4740]: I1014 13:10:55.592582 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a804ef07-67ce-4467-abee-1fc22d6d528f-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:55.605611 master-1 kubenswrapper[4740]: I1014 13:10:55.605529 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a804ef07-67ce-4467-abee-1fc22d6d528f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a804ef07-67ce-4467-abee-1fc22d6d528f" (UID: "a804ef07-67ce-4467-abee-1fc22d6d528f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:10:55.693838 master-1 kubenswrapper[4740]: I1014 13:10:55.693754 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a804ef07-67ce-4467-abee-1fc22d6d528f-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:10:55.771426 master-1 kubenswrapper[4740]: I1014 13:10:55.771208 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:55.771426 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:55.771426 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:55.771426 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:55.771426 master-1 kubenswrapper[4740]: I1014 13:10:55.771373 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:56.133157 master-1 kubenswrapper[4740]: I1014 13:10:56.132970 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-retry-1-master-1_a804ef07-67ce-4467-abee-1fc22d6d528f/installer/0.log" Oct 14 13:10:56.133157 master-1 kubenswrapper[4740]: I1014 13:10:56.133057 4740 generic.go:334] "Generic (PLEG): container finished" podID="a804ef07-67ce-4467-abee-1fc22d6d528f" containerID="92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7" exitCode=1 Oct 14 13:10:56.133157 master-1 kubenswrapper[4740]: I1014 13:10:56.133102 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" 
event={"ID":"a804ef07-67ce-4467-abee-1fc22d6d528f","Type":"ContainerDied","Data":"92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7"} Oct 14 13:10:56.133157 master-1 kubenswrapper[4740]: I1014 13:10:56.133147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" event={"ID":"a804ef07-67ce-4467-abee-1fc22d6d528f","Type":"ContainerDied","Data":"c1a76217927e7639386e63a8649f82b5eb1cb9c12cafe6985144fadd4993ce85"} Oct 14 13:10:56.133823 master-1 kubenswrapper[4740]: I1014 13:10:56.133180 4740 scope.go:117] "RemoveContainer" containerID="92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7" Oct 14 13:10:56.133823 master-1 kubenswrapper[4740]: I1014 13:10:56.133173 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-1" Oct 14 13:10:56.163435 master-1 kubenswrapper[4740]: I1014 13:10:56.163051 4740 scope.go:117] "RemoveContainer" containerID="92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7" Oct 14 13:10:56.163817 master-1 kubenswrapper[4740]: E1014 13:10:56.163744 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7\": container with ID starting with 92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7 not found: ID does not exist" containerID="92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7" Oct 14 13:10:56.163928 master-1 kubenswrapper[4740]: I1014 13:10:56.163810 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7"} err="failed to get container status \"92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7\": rpc error: code = NotFound desc = could not find container 
\"92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7\": container with ID starting with 92f5fa57ddf1c23bcff0975b4912a4e57c9938f7d4d355bc38473a18f88887c7 not found: ID does not exist" Oct 14 13:10:56.189413 master-1 kubenswrapper[4740]: I1014 13:10:56.189296 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 14 13:10:56.195471 master-1 kubenswrapper[4740]: I1014 13:10:56.195411 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-1"] Oct 14 13:10:56.770683 master-1 kubenswrapper[4740]: I1014 13:10:56.770603 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:10:56.770683 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:10:56.770683 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:10:56.770683 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:10:56.771795 master-1 kubenswrapper[4740]: I1014 13:10:56.770694 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:10:56.956113 master-1 kubenswrapper[4740]: I1014 13:10:56.956036 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a804ef07-67ce-4467-abee-1fc22d6d528f" path="/var/lib/kubelet/pods/a804ef07-67ce-4467-abee-1fc22d6d528f/volumes" Oct 14 13:10:57.808129 master-1 kubenswrapper[4740]: I1014 13:10:57.770136 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:10:57.808129 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:10:57.808129 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:10:57.808129 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:10:57.808129 master-1 kubenswrapper[4740]: I1014 13:10:57.770213 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:10:58.771779 master-1 kubenswrapper[4740]: I1014 13:10:58.771679 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:10:58.771779 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:10:58.771779 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:10:58.771779 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:10:58.771779 master-1 kubenswrapper[4740]: I1014 13:10:58.771771 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:10:59.771361 master-1 kubenswrapper[4740]: I1014 13:10:59.771264 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:10:59.771361 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:10:59.771361 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:10:59.771361 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:10:59.772350 master-1 kubenswrapper[4740]: I1014 13:10:59.771389 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:00.771036 master-1 kubenswrapper[4740]: I1014 13:11:00.770931 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:00.771036 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:00.771036 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:00.771036 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:00.771036 master-1 kubenswrapper[4740]: I1014 13:11:00.771017 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:01.640373 master-1 kubenswrapper[4740]: I1014 13:11:01.640316 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"]
Oct 14 13:11:01.641158 master-1 kubenswrapper[4740]: I1014 13:11:01.641106 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler" containerID="cri-o://d3ed9cbb6f5f77f97002c046a3a9e3e350cee658f8b7fea03e390b2ecfd3b928" gracePeriod=30
Oct 14 13:11:01.641607 master-1 kubenswrapper[4740]: I1014 13:11:01.641538 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"]
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: E1014 13:11:01.641880 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="wait-for-host-port"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: I1014 13:11:01.641913 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="wait-for-host-port"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: I1014 13:11:01.641907 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer" containerID="cri-o://9413841217e365c44535d9cbb2430590ab6343e3232163787d636ec31207723f" gracePeriod=30
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: E1014 13:11:01.641942 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: I1014 13:11:01.641956 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: E1014 13:11:01.641975 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: I1014 13:11:01.642028 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: E1014 13:11:01.642046 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: I1014 13:11:01.642057 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: E1014 13:11:01.642076 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a804ef07-67ce-4467-abee-1fc22d6d528f" containerName="installer"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: I1014 13:11:01.642088 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a804ef07-67ce-4467-abee-1fc22d6d528f" containerName="installer"
Oct 14 13:11:01.642081 master-1 kubenswrapper[4740]: I1014 13:11:01.642092 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller" containerID="cri-o://8092a9e6ffee3c6072e897161e78ff3767262aeb08c415263028b74755398c8c" gracePeriod=30
Oct 14 13:11:01.642878 master-1 kubenswrapper[4740]: I1014 13:11:01.642295 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler"
Oct 14 13:11:01.642878 master-1 kubenswrapper[4740]: I1014 13:11:01.642321 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-cert-syncer"
Oct 14 13:11:01.642878 master-1 kubenswrapper[4740]: I1014 13:11:01.642344 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a804ef07-67ce-4467-abee-1fc22d6d528f" containerName="installer"
Oct 14 13:11:01.642878 master-1 kubenswrapper[4740]: I1014 13:11:01.642360 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler-recovery-controller"
Oct 14 13:11:01.655887 master-1 kubenswrapper[4740]: I1014 13:11:01.655456 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-1 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": EOF" start-of-body=
Oct 14 13:11:01.655887 master-1 kubenswrapper[4740]: I1014 13:11:01.655546 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="89fad8183e18ab3ad0c46d272335e5f8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": EOF"
Oct 14 13:11:01.771793 master-1 kubenswrapper[4740]: I1014 13:11:01.771722 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:01.771793 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:01.771793 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:01.771793 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:01.772647 master-1 kubenswrapper[4740]: I1014 13:11:01.771797 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:01.786893 master-1 kubenswrapper[4740]: I1014 13:11:01.786826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:01.787035 master-1 kubenswrapper[4740]: I1014 13:11:01.786975 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:01.828187 master-1 kubenswrapper[4740]: I1014 13:11:01.828083 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/kube-scheduler-cert-syncer/0.log"
Oct 14 13:11:01.829520 master-1 kubenswrapper[4740]: I1014 13:11:01.829466 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:01.835362 master-1 kubenswrapper[4740]: I1014 13:11:01.835301 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="89fad8183e18ab3ad0c46d272335e5f8" podUID="a61df698d34d049669621b2249bfe758"
Oct 14 13:11:01.887867 master-1 kubenswrapper[4740]: I1014 13:11:01.887782 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") pod \"89fad8183e18ab3ad0c46d272335e5f8\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") "
Oct 14 13:11:01.888104 master-1 kubenswrapper[4740]: I1014 13:11:01.888017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") pod \"89fad8183e18ab3ad0c46d272335e5f8\" (UID: \"89fad8183e18ab3ad0c46d272335e5f8\") "
Oct 14 13:11:01.888104 master-1 kubenswrapper[4740]: I1014 13:11:01.887998 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "89fad8183e18ab3ad0c46d272335e5f8" (UID: "89fad8183e18ab3ad0c46d272335e5f8"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:11:01.888270 master-1 kubenswrapper[4740]: I1014 13:11:01.888196 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "89fad8183e18ab3ad0c46d272335e5f8" (UID: "89fad8183e18ab3ad0c46d272335e5f8"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:11:01.888334 master-1 kubenswrapper[4740]: I1014 13:11:01.888286 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:01.888405 master-1 kubenswrapper[4740]: I1014 13:11:01.888339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:01.888405 master-1 kubenswrapper[4740]: I1014 13:11:01.888379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:01.888526 master-1 kubenswrapper[4740]: I1014 13:11:01.888450 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"a61df698d34d049669621b2249bfe758\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:01.888658 master-1 kubenswrapper[4740]: I1014 13:11:01.888608 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-resource-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:11:01.888658 master-1 kubenswrapper[4740]: I1014 13:11:01.888645 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/89fad8183e18ab3ad0c46d272335e5f8-cert-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:11:02.181263 master-1 kubenswrapper[4740]: I1014 13:11:02.181169 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_89fad8183e18ab3ad0c46d272335e5f8/kube-scheduler-cert-syncer/0.log"
Oct 14 13:11:02.183304 master-1 kubenswrapper[4740]: I1014 13:11:02.182489 4740 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" containerID="8092a9e6ffee3c6072e897161e78ff3767262aeb08c415263028b74755398c8c" exitCode=0
Oct 14 13:11:02.183304 master-1 kubenswrapper[4740]: I1014 13:11:02.182540 4740 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" containerID="9413841217e365c44535d9cbb2430590ab6343e3232163787d636ec31207723f" exitCode=2
Oct 14 13:11:02.183304 master-1 kubenswrapper[4740]: I1014 13:11:02.182560 4740 generic.go:334] "Generic (PLEG): container finished" podID="89fad8183e18ab3ad0c46d272335e5f8" containerID="d3ed9cbb6f5f77f97002c046a3a9e3e350cee658f8b7fea03e390b2ecfd3b928" exitCode=0
Oct 14 13:11:02.183304 master-1 kubenswrapper[4740]: I1014 13:11:02.182621 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:11:02.183304 master-1 kubenswrapper[4740]: I1014 13:11:02.182647 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9e12a003bbc7c76772420501a711514135bd2c3eaf444c698e41ce4a3a777c0"
Oct 14 13:11:02.186146 master-1 kubenswrapper[4740]: I1014 13:11:02.186034 4740 generic.go:334] "Generic (PLEG): container finished" podID="29394c7c-e1cf-4e8e-abef-d50e9466a5a6" containerID="4fd0324278c14bdc7968a2293eb7c15d589ae35f2214f59eeecf0fd590986edd" exitCode=0
Oct 14 13:11:02.186146 master-1 kubenswrapper[4740]: I1014 13:11:02.186111 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"29394c7c-e1cf-4e8e-abef-d50e9466a5a6","Type":"ContainerDied","Data":"4fd0324278c14bdc7968a2293eb7c15d589ae35f2214f59eeecf0fd590986edd"}
Oct 14 13:11:02.189526 master-1 kubenswrapper[4740]: I1014 13:11:02.189162 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="89fad8183e18ab3ad0c46d272335e5f8" podUID="a61df698d34d049669621b2249bfe758"
Oct 14 13:11:02.216294 master-1 kubenswrapper[4740]: I1014 13:11:02.216198 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" oldPodUID="89fad8183e18ab3ad0c46d272335e5f8" podUID="a61df698d34d049669621b2249bfe758"
Oct 14 13:11:02.440463 master-1 kubenswrapper[4740]: I1014 13:11:02.440211 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:02.440463 master-1 kubenswrapper[4740]: I1014 13:11:02.440378 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:02.771261 master-1 kubenswrapper[4740]: I1014 13:11:02.771107 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:02.771261 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:02.771261 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:02.771261 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:02.771261 master-1 kubenswrapper[4740]: I1014 13:11:02.771215 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:02.952506 master-1 kubenswrapper[4740]: I1014 13:11:02.952452 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fad8183e18ab3ad0c46d272335e5f8" path="/var/lib/kubelet/pods/89fad8183e18ab3ad0c46d272335e5f8/volumes"
Oct 14 13:11:03.608537 master-1 kubenswrapper[4740]: I1014 13:11:03.608418 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1"
Oct 14 13:11:03.725403 master-1 kubenswrapper[4740]: I1014 13:11:03.725107 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-var-lock\") pod \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") "
Oct 14 13:11:03.725403 master-1 kubenswrapper[4740]: I1014 13:11:03.725282 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-var-lock" (OuterVolumeSpecName: "var-lock") pod "29394c7c-e1cf-4e8e-abef-d50e9466a5a6" (UID: "29394c7c-e1cf-4e8e-abef-d50e9466a5a6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:11:03.725403 master-1 kubenswrapper[4740]: I1014 13:11:03.725321 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kubelet-dir\") pod \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") "
Oct 14 13:11:03.725403 master-1 kubenswrapper[4740]: I1014 13:11:03.725377 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kube-api-access\") pod \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\" (UID: \"29394c7c-e1cf-4e8e-abef-d50e9466a5a6\") "
Oct 14 13:11:03.725403 master-1 kubenswrapper[4740]: I1014 13:11:03.725421 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "29394c7c-e1cf-4e8e-abef-d50e9466a5a6" (UID: "29394c7c-e1cf-4e8e-abef-d50e9466a5a6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:11:03.725993 master-1 kubenswrapper[4740]: I1014 13:11:03.725810 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 14 13:11:03.725993 master-1 kubenswrapper[4740]: I1014 13:11:03.725835 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:11:03.728146 master-1 kubenswrapper[4740]: I1014 13:11:03.728098 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "29394c7c-e1cf-4e8e-abef-d50e9466a5a6" (UID: "29394c7c-e1cf-4e8e-abef-d50e9466a5a6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:11:03.771376 master-1 kubenswrapper[4740]: I1014 13:11:03.771144 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:03.771376 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:03.771376 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:03.771376 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:03.771770 master-1 kubenswrapper[4740]: I1014 13:11:03.771404 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:03.827487 master-1 kubenswrapper[4740]: I1014 13:11:03.827335 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29394c7c-e1cf-4e8e-abef-d50e9466a5a6-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 14 13:11:04.202929 master-1 kubenswrapper[4740]: I1014 13:11:04.202720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-1" event={"ID":"29394c7c-e1cf-4e8e-abef-d50e9466a5a6","Type":"ContainerDied","Data":"c3f717cff0e9cb77bfb2ca206186d5dc520cb695f90da457c9152dec6b196854"}
Oct 14 13:11:04.202929 master-1 kubenswrapper[4740]: I1014 13:11:04.202780 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f717cff0e9cb77bfb2ca206186d5dc520cb695f90da457c9152dec6b196854"
Oct 14 13:11:04.202929 master-1 kubenswrapper[4740]: I1014 13:11:04.202864 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-1"
Oct 14 13:11:04.771408 master-1 kubenswrapper[4740]: I1014 13:11:04.771321 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:04.771408 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:04.771408 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:04.771408 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:04.771796 master-1 kubenswrapper[4740]: I1014 13:11:04.771441 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:05.771601 master-1 kubenswrapper[4740]: I1014 13:11:05.771497 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:05.771601 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:05.771601 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:05.771601 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:05.771601 master-1 kubenswrapper[4740]: I1014 13:11:05.771577 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:06.772045 master-1 kubenswrapper[4740]: I1014 13:11:06.771941 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:06.772045 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:06.772045 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:06.772045 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:06.772803 master-1 kubenswrapper[4740]: I1014 13:11:06.772043 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:07.440550 master-1 kubenswrapper[4740]: I1014 13:11:07.440418 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:07.440550 master-1 kubenswrapper[4740]: I1014 13:11:07.440528 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:07.771580 master-1 kubenswrapper[4740]: I1014 13:11:07.771488 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:07.771580 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:07.771580 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:07.771580 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:07.771929 master-1 kubenswrapper[4740]: I1014 13:11:07.771599 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:08.770995 master-1 kubenswrapper[4740]: I1014 13:11:08.770910 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:08.770995 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:08.770995 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:08.770995 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:08.771992 master-1 kubenswrapper[4740]: I1014 13:11:08.771000 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:09.307000 master-1 kubenswrapper[4740]: I1014 13:11:09.306773 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-c57444595-zs4m8"]
Oct 14 13:11:09.307405 master-1 kubenswrapper[4740]: I1014 13:11:09.307194 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" containerID="cri-o://194ea90143b4d79876e5b96800a908311ed2f6a1f27daf72bfecc0523fd85c7f" gracePeriod=120
Oct 14 13:11:09.771125 master-1 kubenswrapper[4740]: I1014 13:11:09.771037 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:09.771125 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:09.771125 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:09.771125 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:09.771961 master-1 kubenswrapper[4740]: I1014 13:11:09.771134 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:10.108387 master-1 kubenswrapper[4740]: I1014 13:11:10.108211 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:11:10.770771 master-1 kubenswrapper[4740]: I1014 13:11:10.770653 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:10.770771 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:10.770771 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:10.770771 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:10.770771 master-1 kubenswrapper[4740]: I1014 13:11:10.770764 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:11.770783 master-1 kubenswrapper[4740]: I1014 13:11:11.770718 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:11.770783 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:11.770783 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:11.770783 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:11.771737 master-1 kubenswrapper[4740]: I1014 13:11:11.770794 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:12.441123 master-1 kubenswrapper[4740]: I1014 13:11:12.440992 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:12.441527 master-1 kubenswrapper[4740]: I1014 13:11:12.441133 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:12.441527 master-1 kubenswrapper[4740]: I1014 13:11:12.441299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"
Oct 14 13:11:12.442318 master-1 kubenswrapper[4740]: I1014 13:11:12.442209 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:12.442471 master-1 kubenswrapper[4740]: I1014 13:11:12.442315 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:12.771629 master-1 kubenswrapper[4740]: I1014 13:11:12.771533 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:11:12.771629 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:11:12.771629 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:11:12.771629 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:11:12.771629 master-1 kubenswrapper[4740]: I1014 13:11:12.771621 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:13.127214 master-1 kubenswrapper[4740]: I1014 13:11:13.127042 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 14 13:11:13.127746 master-1 kubenswrapper[4740]: E1014 13:11:13.127430 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29394c7c-e1cf-4e8e-abef-d50e9466a5a6" containerName="installer"
Oct 14 13:11:13.127746 master-1 kubenswrapper[4740]: I1014 13:11:13.127449 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="29394c7c-e1cf-4e8e-abef-d50e9466a5a6" containerName="installer"
Oct 14 13:11:13.131171 master-1 kubenswrapper[4740]: I1014 13:11:13.127594 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="29394c7c-e1cf-4e8e-abef-d50e9466a5a6" containerName="installer"
Oct 14 13:11:13.143542 master-1 kubenswrapper[4740]: I1014 13:11:13.143451 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.182842 master-1 kubenswrapper[4740]: I1014 13:11:13.182745 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 14 13:11:13.281094 master-1 kubenswrapper[4740]: I1014 13:11:13.281038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.281094 master-1 kubenswrapper[4740]: I1014 13:11:13.281094 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.281377 master-1 kubenswrapper[4740]: I1014 13:11:13.281112 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.383598 master-1 kubenswrapper[4740]: I1014 13:11:13.383361 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.383598 master-1 kubenswrapper[4740]: I1014 13:11:13.383492 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.383598 master-1 kubenswrapper[4740]: I1014 13:11:13.383544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.383598 master-1 kubenswrapper[4740]: I1014 13:11:13.383553 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.384321 master-1 kubenswrapper[4740]: I1014 13:11:13.383689 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:11:13.384321 master-1 kubenswrapper[4740]: I1014 13:11:13.383762 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:13.478097 master-1 kubenswrapper[4740]: I1014 13:11:13.477947 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:13.771681 master-1 kubenswrapper[4740]: I1014 13:11:13.771608 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:13.771681 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:13.771681 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:13.771681 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:13.772303 master-1 kubenswrapper[4740]: I1014 13:11:13.771708 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: I1014 13:11:14.035161 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:11:14.035267 
master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:11:14.035267 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:11:14.036391 master-1 kubenswrapper[4740]: I1014 13:11:14.035286 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:14.272645 master-1 kubenswrapper[4740]: I1014 13:11:14.272579 4740 generic.go:334] "Generic (PLEG): container finished" podID="946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" containerID="97abe0d8c7e85255ddcf3f08db5d8fadc02560d6e693cb64ea478661abddbf69" exitCode=0 Oct 14 13:11:14.273128 master-1 kubenswrapper[4740]: I1014 13:11:14.272706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9","Type":"ContainerDied","Data":"97abe0d8c7e85255ddcf3f08db5d8fadc02560d6e693cb64ea478661abddbf69"} Oct 14 13:11:14.275880 master-1 kubenswrapper[4740]: I1014 13:11:14.275822 4740 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" 
containerID="2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0" exitCode=0 Oct 14 13:11:14.276016 master-1 kubenswrapper[4740]: I1014 13:11:14.275881 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerDied","Data":"2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0"} Oct 14 13:11:14.276016 master-1 kubenswrapper[4740]: I1014 13:11:14.275920 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"5f604d117b51de3c703d559c5b584173cde3c1aa2241f4d9dba1bb5cbf54ba44"} Oct 14 13:11:14.772999 master-1 kubenswrapper[4740]: I1014 13:11:14.772903 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:14.772999 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:14.772999 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:14.772999 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:14.772999 master-1 kubenswrapper[4740]: I1014 13:11:14.772966 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:14.861300 master-1 kubenswrapper[4740]: I1014 13:11:14.861116 4740 kubelet.go:1505] "Image garbage collection succeeded" Oct 14 13:11:14.943299 master-1 kubenswrapper[4740]: I1014 13:11:14.942950 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:11:14.981535 master-1 kubenswrapper[4740]: I1014 13:11:14.981478 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="77e4b8fd-0184-472e-a45b-f2fa65938919" Oct 14 13:11:14.981535 master-1 kubenswrapper[4740]: I1014 13:11:14.981530 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="77e4b8fd-0184-472e-a45b-f2fa65938919" Oct 14 13:11:14.997956 master-1 kubenswrapper[4740]: I1014 13:11:14.997917 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 14 13:11:15.002093 master-1 kubenswrapper[4740]: I1014 13:11:15.002034 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:11:15.008361 master-1 kubenswrapper[4740]: I1014 13:11:15.008317 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 14 13:11:15.024906 master-1 kubenswrapper[4740]: I1014 13:11:15.024845 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:11:15.030528 master-1 kubenswrapper[4740]: I1014 13:11:15.030489 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 14 13:11:15.285585 master-1 kubenswrapper[4740]: I1014 13:11:15.285462 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac"} Oct 14 13:11:15.285585 master-1 kubenswrapper[4740]: I1014 13:11:15.285523 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64"} Oct 14 13:11:15.285585 master-1 kubenswrapper[4740]: I1014 13:11:15.285542 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00"} Oct 14 13:11:15.287160 master-1 kubenswrapper[4740]: I1014 13:11:15.287120 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"1cf141079f9748454ec19ec0db69cd859eba31f8cbfe7a61434ebcb0f25e4ba5"} Oct 14 13:11:15.588351 master-1 kubenswrapper[4740]: I1014 13:11:15.588283 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:11:15.728591 master-1 kubenswrapper[4740]: I1014 13:11:15.724992 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kubelet-dir\") pod \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " Oct 14 13:11:15.728591 master-1 kubenswrapper[4740]: I1014 13:11:15.725130 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-var-lock\") pod \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " Oct 14 13:11:15.728591 master-1 kubenswrapper[4740]: I1014 13:11:15.725220 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kube-api-access\") pod \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\" (UID: \"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9\") " Oct 14 13:11:15.728591 master-1 kubenswrapper[4740]: I1014 13:11:15.727906 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" (UID: "946295a4-6f1e-44dd-a7f4-ab062bf3f4b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:11:15.728591 master-1 kubenswrapper[4740]: I1014 13:11:15.727971 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" (UID: "946295a4-6f1e-44dd-a7f4-ab062bf3f4b9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:11:15.770737 master-1 kubenswrapper[4740]: I1014 13:11:15.770670 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:15.770737 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:15.770737 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:15.770737 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:15.771031 master-1 kubenswrapper[4740]: I1014 13:11:15.770762 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:15.827411 master-1 kubenswrapper[4740]: I1014 13:11:15.827356 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:11:15.827411 master-1 kubenswrapper[4740]: I1014 13:11:15.827394 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:11:16.021029 master-1 kubenswrapper[4740]: I1014 13:11:16.020938 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" (UID: "946295a4-6f1e-44dd-a7f4-ab062bf3f4b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:11:16.030663 master-1 kubenswrapper[4740]: I1014 13:11:16.030615 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:11:16.294781 master-1 kubenswrapper[4740]: I1014 13:11:16.294642 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-1" event={"ID":"946295a4-6f1e-44dd-a7f4-ab062bf3f4b9","Type":"ContainerDied","Data":"cf093794044825a5f5c57160c7400f9bc5cf0ec0224001d1c365593bee764872"} Oct 14 13:11:16.294781 master-1 kubenswrapper[4740]: I1014 13:11:16.294703 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf093794044825a5f5c57160c7400f9bc5cf0ec0224001d1c365593bee764872" Oct 14 13:11:16.294781 master-1 kubenswrapper[4740]: I1014 13:11:16.294784 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-1" Oct 14 13:11:16.314910 master-1 kubenswrapper[4740]: I1014 13:11:16.314801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641"} Oct 14 13:11:16.314910 master-1 kubenswrapper[4740]: I1014 13:11:16.314872 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"34b1362996d1e0c2cea0bee73eb18468","Type":"ContainerStarted","Data":"1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41"} Oct 14 13:11:16.315434 master-1 kubenswrapper[4740]: I1014 13:11:16.315400 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:16.317586 master-1 kubenswrapper[4740]: I1014 13:11:16.317539 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03"} Oct 14 13:11:16.345689 master-1 kubenswrapper[4740]: I1014 13:11:16.345576 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" podStartSLOduration=3.345551805 podStartE2EDuration="3.345551805s" podCreationTimestamp="2025-10-14 13:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:11:16.340513329 +0000 UTC m=+302.150802678" watchObservedRunningTime="2025-10-14 13:11:16.345551805 +0000 UTC m=+302.155841164" Oct 14 13:11:16.771314 master-1 kubenswrapper[4740]: I1014 13:11:16.771214 4740 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:16.771314 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:16.771314 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:16.771314 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:16.771752 master-1 kubenswrapper[4740]: I1014 13:11:16.771348 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:17.441009 master-1 kubenswrapper[4740]: I1014 13:11:17.440924 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:11:17.441009 master-1 kubenswrapper[4740]: I1014 13:11:17.441010 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:11:17.771008 master-1 kubenswrapper[4740]: I1014 13:11:17.770908 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:17.771008 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:17.771008 master-1 
kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:17.771008 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:17.771467 master-1 kubenswrapper[4740]: I1014 13:11:17.771027 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:18.478701 master-1 kubenswrapper[4740]: I1014 13:11:18.478608 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:18.478701 master-1 kubenswrapper[4740]: I1014 13:11:18.478692 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:18.488352 master-1 kubenswrapper[4740]: I1014 13:11:18.488307 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:18.600791 master-1 kubenswrapper[4740]: I1014 13:11:18.600681 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 14 13:11:18.601115 master-1 kubenswrapper[4740]: E1014 13:11:18.601074 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" containerName="installer" Oct 14 13:11:18.601115 master-1 kubenswrapper[4740]: I1014 13:11:18.601100 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" containerName="installer" Oct 14 13:11:18.601535 master-1 kubenswrapper[4740]: I1014 13:11:18.601303 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" containerName="installer" Oct 14 13:11:18.601970 master-1 kubenswrapper[4740]: I1014 13:11:18.601926 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:11:18.605270 master-1 kubenswrapper[4740]: I1014 13:11:18.605175 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"openshift-service-ca.crt" Oct 14 13:11:18.606351 master-1 kubenswrapper[4740]: I1014 13:11:18.606284 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 14 13:11:18.616654 master-1 kubenswrapper[4740]: I1014 13:11:18.616582 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 14 13:11:18.671682 master-1 kubenswrapper[4740]: I1014 13:11:18.671603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhz7w\" (UniqueName: \"kubernetes.io/projected/0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56-kube-api-access-fhz7w\") pod \"kube-apiserver-guard-master-1\" (UID: \"0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:11:18.771011 master-1 kubenswrapper[4740]: I1014 13:11:18.770939 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:18.771011 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:18.771011 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:18.771011 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:18.771405 master-1 kubenswrapper[4740]: I1014 13:11:18.771025 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Oct 14 13:11:18.772528 master-1 kubenswrapper[4740]: I1014 13:11:18.772479 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhz7w\" (UniqueName: \"kubernetes.io/projected/0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56-kube-api-access-fhz7w\") pod \"kube-apiserver-guard-master-1\" (UID: \"0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:11:18.805392 master-1 kubenswrapper[4740]: I1014 13:11:18.805311 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhz7w\" (UniqueName: \"kubernetes.io/projected/0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56-kube-api-access-fhz7w\") pod \"kube-apiserver-guard-master-1\" (UID: \"0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56\") " pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:11:18.955614 master-1 kubenswrapper[4740]: I1014 13:11:18.955562 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: I1014 13:11:19.034830 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok 
Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:11:19.035054 master-1 kubenswrapper[4740]: I1014 13:11:19.034919 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:19.344608 master-1 kubenswrapper[4740]: I1014 13:11:19.343925 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:19.437192 master-1 kubenswrapper[4740]: I1014 13:11:19.436590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 14 13:11:19.771493 master-1 kubenswrapper[4740]: I1014 13:11:19.771430 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:19.771493 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:19.771493 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:19.771493 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:19.772497 master-1 kubenswrapper[4740]: I1014 13:11:19.772412 4740 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:20.359279 master-1 kubenswrapper[4740]: I1014 13:11:20.359092 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" event={"ID":"0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56","Type":"ContainerStarted","Data":"c7ae531a8f27ec7b4c4fef9dcf28294638126ef0e1ecbba3c0009cb985efe4bd"} Oct 14 13:11:20.359279 master-1 kubenswrapper[4740]: I1014 13:11:20.359221 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" event={"ID":"0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56","Type":"ContainerStarted","Data":"4f8cee4ed6770e2cc07664302c581566d9b7d454bf2cc643ba4453d2bae71e65"} Oct 14 13:11:20.359824 master-1 kubenswrapper[4740]: I1014 13:11:20.359732 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:11:20.367316 master-1 kubenswrapper[4740]: I1014 13:11:20.367200 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:11:20.387952 master-1 kubenswrapper[4740]: I1014 13:11:20.387826 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podStartSLOduration=2.387799992 podStartE2EDuration="2.387799992s" podCreationTimestamp="2025-10-14 13:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:11:20.386784981 +0000 UTC m=+306.197074350" watchObservedRunningTime="2025-10-14 13:11:20.387799992 +0000 UTC m=+306.198089351" Oct 14 13:11:20.771322 master-1 kubenswrapper[4740]: I1014 13:11:20.771215 4740 
patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:20.771322 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:20.771322 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:20.771322 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:20.772398 master-1 kubenswrapper[4740]: I1014 13:11:20.771344 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:21.771333 master-1 kubenswrapper[4740]: I1014 13:11:21.771184 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:21.771333 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:21.771333 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:21.771333 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:21.772414 master-1 kubenswrapper[4740]: I1014 13:11:21.771364 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:22.440745 master-1 kubenswrapper[4740]: I1014 13:11:22.440633 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 
192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:11:22.440745 master-1 kubenswrapper[4740]: I1014 13:11:22.440724 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:11:22.772025 master-1 kubenswrapper[4740]: I1014 13:11:22.771935 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:22.772025 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:22.772025 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:22.772025 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:22.773129 master-1 kubenswrapper[4740]: I1014 13:11:22.772028 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:23.599552 master-1 kubenswrapper[4740]: I1014 13:11:23.599409 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-guard-master-1"] Oct 14 13:11:23.771398 master-1 kubenswrapper[4740]: I1014 13:11:23.771282 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:23.771398 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:23.771398 master-1 
kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:23.771398 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:23.771993 master-1 kubenswrapper[4740]: I1014 13:11:23.771426 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: I1014 13:11:24.034798 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:11:24.034893 master-1 kubenswrapper[4740]: 
I1014 13:11:24.034893 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:24.036557 master-1 kubenswrapper[4740]: I1014 13:11:24.035006 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" Oct 14 13:11:24.771389 master-1 kubenswrapper[4740]: I1014 13:11:24.771289 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:24.771389 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:24.771389 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:24.771389 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:24.771972 master-1 kubenswrapper[4740]: I1014 13:11:24.771400 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:25.771594 master-1 kubenswrapper[4740]: I1014 13:11:25.771493 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:25.771594 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:25.771594 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:25.771594 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:25.771594 master-1 kubenswrapper[4740]: I1014 
13:11:25.771590 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:26.770775 master-1 kubenswrapper[4740]: I1014 13:11:26.770679 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:26.770775 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:26.770775 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:26.770775 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:26.770775 master-1 kubenswrapper[4740]: I1014 13:11:26.770758 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:27.440896 master-1 kubenswrapper[4740]: I1014 13:11:27.440831 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:11:27.441735 master-1 kubenswrapper[4740]: I1014 13:11:27.440936 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:11:27.771097 master-1 kubenswrapper[4740]: I1014 13:11:27.771043 
4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:27.771097 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:27.771097 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:27.771097 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:27.771585 master-1 kubenswrapper[4740]: I1014 13:11:27.771553 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:28.771599 master-1 kubenswrapper[4740]: I1014 13:11:28.771486 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:28.771599 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:28.771599 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:28.771599 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:28.771599 master-1 kubenswrapper[4740]: I1014 13:11:28.771584 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: I1014 13:11:29.034477 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[+]ping ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:11:29.034687 master-1 kubenswrapper[4740]: I1014 13:11:29.034609 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:29.771901 master-1 kubenswrapper[4740]: I1014 13:11:29.771805 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:11:29.771901 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:11:29.771901 
master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:11:29.771901 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:11:29.771901 master-1 kubenswrapper[4740]: I1014 13:11:29.771900 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:29.772791 master-1 kubenswrapper[4740]: I1014 13:11:29.771980 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:11:29.772982 master-1 kubenswrapper[4740]: I1014 13:11:29.772928 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"57f4d6aac1f3c80fb4d6e8a8343432ff9667911716e629d1c9aa8b443a819f98"} pod="openshift-ingress/router-default-5ddb89f76-xf924" containerMessage="Container router failed startup probe, will be restarted" Oct 14 13:11:29.773038 master-1 kubenswrapper[4740]: I1014 13:11:29.772997 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" containerID="cri-o://57f4d6aac1f3c80fb4d6e8a8343432ff9667911716e629d1c9aa8b443a819f98" gracePeriod=3600 Oct 14 13:11:31.525288 master-1 kubenswrapper[4740]: I1014 13:11:31.525134 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:11:31.527655 master-1 kubenswrapper[4740]: I1014 13:11:31.527570 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.579206 master-1 kubenswrapper[4740]: I1014 13:11:31.579102 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:11:31.682827 master-1 kubenswrapper[4740]: I1014 13:11:31.682747 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.683144 master-1 kubenswrapper[4740]: I1014 13:11:31.682896 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.784757 master-1 kubenswrapper[4740]: I1014 13:11:31.784594 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.784757 master-1 kubenswrapper[4740]: I1014 13:11:31.784755 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.784958 master-1 kubenswrapper[4740]: I1014 13:11:31.784821 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.784958 master-1 kubenswrapper[4740]: I1014 13:11:31.784885 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.871816 master-1 kubenswrapper[4740]: I1014 13:11:31.871727 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:11:31.908033 master-1 kubenswrapper[4740]: W1014 13:11:31.907936 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod307e6b842bfe51f420cddfc39289bc3c.slice/crio-b67caa3ed969288705757561d3901f7a1269b03a91cc391c1fedbca5e3e2c36a WatchSource:0}: Error finding container b67caa3ed969288705757561d3901f7a1269b03a91cc391c1fedbca5e3e2c36a: Status 404 returned error can't find the container with id b67caa3ed969288705757561d3901f7a1269b03a91cc391c1fedbca5e3e2c36a Oct 14 13:11:32.237469 master-1 kubenswrapper[4740]: I1014 13:11:32.237408 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:11:32.440140 master-1 kubenswrapper[4740]: I1014 13:11:32.439942 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:11:32.440140 master-1 kubenswrapper[4740]: I1014 13:11:32.440021 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:11:32.453355 master-1 kubenswrapper[4740]: I1014 13:11:32.453285 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"307e6b842bfe51f420cddfc39289bc3c","Type":"ContainerStarted","Data":"3f0bc4dbe3b6e7ad165b03d3b977fbdd2911734cf101d9169ff05b295df5788b"} Oct 14 13:11:32.453355 master-1 kubenswrapper[4740]: 
I1014 13:11:32.453356 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"307e6b842bfe51f420cddfc39289bc3c","Type":"ContainerStarted","Data":"b67caa3ed969288705757561d3901f7a1269b03a91cc391c1fedbca5e3e2c36a"} Oct 14 13:11:32.455940 master-1 kubenswrapper[4740]: I1014 13:11:32.455842 4740 generic.go:334] "Generic (PLEG): container finished" podID="32fb33a3-6da2-4d25-b5e9-799604d68cc9" containerID="cf6d422071d841561c6f78de1237a1026480edd8f4b4fe6ca27ad710e5faa5ea" exitCode=0 Oct 14 13:11:32.456024 master-1 kubenswrapper[4740]: I1014 13:11:32.455944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"32fb33a3-6da2-4d25-b5e9-799604d68cc9","Type":"ContainerDied","Data":"cf6d422071d841561c6f78de1237a1026480edd8f4b4fe6ca27ad710e5faa5ea"} Oct 14 13:11:33.484513 master-1 kubenswrapper[4740]: I1014 13:11:33.484444 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:11:33.811776 master-1 kubenswrapper[4740]: I1014 13:11:33.811707 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:11:33.924011 master-1 kubenswrapper[4740]: I1014 13:11:33.923929 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-var-lock\") pod \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " Oct 14 13:11:33.924523 master-1 kubenswrapper[4740]: I1014 13:11:33.924069 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kube-api-access\") pod \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " Oct 14 13:11:33.924523 master-1 kubenswrapper[4740]: I1014 13:11:33.924121 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kubelet-dir\") pod \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\" (UID: \"32fb33a3-6da2-4d25-b5e9-799604d68cc9\") " Oct 14 13:11:33.924523 master-1 kubenswrapper[4740]: I1014 13:11:33.924099 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-var-lock" (OuterVolumeSpecName: "var-lock") pod "32fb33a3-6da2-4d25-b5e9-799604d68cc9" (UID: "32fb33a3-6da2-4d25-b5e9-799604d68cc9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:11:33.924523 master-1 kubenswrapper[4740]: I1014 13:11:33.924265 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "32fb33a3-6da2-4d25-b5e9-799604d68cc9" (UID: "32fb33a3-6da2-4d25-b5e9-799604d68cc9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:11:33.924820 master-1 kubenswrapper[4740]: I1014 13:11:33.924788 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:11:33.924820 master-1 kubenswrapper[4740]: I1014 13:11:33.924813 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:11:33.928247 master-1 kubenswrapper[4740]: I1014 13:11:33.928151 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "32fb33a3-6da2-4d25-b5e9-799604d68cc9" (UID: "32fb33a3-6da2-4d25-b5e9-799604d68cc9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:11:34.026771 master-1 kubenswrapper[4740]: I1014 13:11:34.026565 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32fb33a3-6da2-4d25-b5e9-799604d68cc9-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: I1014 13:11:34.031730 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:11:34.031800 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:11:34.032297 master-1 kubenswrapper[4740]: I1014 13:11:34.031837 4740 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:11:34.471943 master-1 kubenswrapper[4740]: I1014 13:11:34.471703 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-1" event={"ID":"32fb33a3-6da2-4d25-b5e9-799604d68cc9","Type":"ContainerDied","Data":"a03fb1716c8fdb868d08d31a7ef51262391ab20de3ba20469a09d35614541922"} Oct 14 13:11:34.471943 master-1 kubenswrapper[4740]: I1014 13:11:34.471776 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a03fb1716c8fdb868d08d31a7ef51262391ab20de3ba20469a09d35614541922" Oct 14 13:11:34.471943 master-1 kubenswrapper[4740]: I1014 13:11:34.471862 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-1" Oct 14 13:11:35.481042 master-1 kubenswrapper[4740]: I1014 13:11:35.480993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"307e6b842bfe51f420cddfc39289bc3c","Type":"ContainerStarted","Data":"8f7f6048dbdc1a310a3e5e5e10294d23b83452d6cb4d457ef27b2ca284c65673"} Oct 14 13:11:36.385706 master-1 kubenswrapper[4740]: I1014 13:11:36.385629 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"] Oct 14 13:11:36.385993 master-1 kubenswrapper[4740]: E1014 13:11:36.385885 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fb33a3-6da2-4d25-b5e9-799604d68cc9" containerName="installer" Oct 14 13:11:36.385993 master-1 kubenswrapper[4740]: I1014 13:11:36.385900 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fb33a3-6da2-4d25-b5e9-799604d68cc9" containerName="installer" Oct 14 13:11:36.386131 master-1 
kubenswrapper[4740]: I1014 13:11:36.386048 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fb33a3-6da2-4d25-b5e9-799604d68cc9" containerName="installer" Oct 14 13:11:36.386578 master-1 kubenswrapper[4740]: I1014 13:11:36.386552 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 14 13:11:36.389553 master-1 kubenswrapper[4740]: I1014 13:11:36.389464 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"openshift-service-ca.crt" Oct 14 13:11:36.389804 master-1 kubenswrapper[4740]: I1014 13:11:36.389736 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 13:11:36.400173 master-1 kubenswrapper[4740]: I1014 13:11:36.400111 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"] Oct 14 13:11:36.488825 master-1 kubenswrapper[4740]: I1014 13:11:36.488790 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"307e6b842bfe51f420cddfc39289bc3c","Type":"ContainerStarted","Data":"1454c7db3bd11bf75bea8fa684ae07789621749144f0ddb7b02fe3b66731d7cd"} Oct 14 13:11:36.489362 master-1 kubenswrapper[4740]: I1014 13:11:36.489345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"307e6b842bfe51f420cddfc39289bc3c","Type":"ContainerStarted","Data":"e25a090fbeaf10ae15d12c1a5a4fc4c7f9e4949adb35ef26373fca7108a10da2"} Oct 14 13:11:36.517111 master-1 kubenswrapper[4740]: I1014 13:11:36.516985 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podStartSLOduration=5.516952382 
podStartE2EDuration="5.516952382s" podCreationTimestamp="2025-10-14 13:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:11:36.511060884 +0000 UTC m=+322.321350213" watchObservedRunningTime="2025-10-14 13:11:36.516952382 +0000 UTC m=+322.327241751"
Oct 14 13:11:36.562917 master-1 kubenswrapper[4740]: I1014 13:11:36.562829 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8njc\" (UniqueName: \"kubernetes.io/projected/87a988d8-ed78-4396-a4fa-d856ff93860f-kube-api-access-s8njc\") pod \"kube-controller-manager-guard-master-1\" (UID: \"87a988d8-ed78-4396-a4fa-d856ff93860f\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1"
Oct 14 13:11:36.664375 master-1 kubenswrapper[4740]: I1014 13:11:36.664169 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8njc\" (UniqueName: \"kubernetes.io/projected/87a988d8-ed78-4396-a4fa-d856ff93860f-kube-api-access-s8njc\") pod \"kube-controller-manager-guard-master-1\" (UID: \"87a988d8-ed78-4396-a4fa-d856ff93860f\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1"
Oct 14 13:11:36.685566 master-1 kubenswrapper[4740]: I1014 13:11:36.685492 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8njc\" (UniqueName: \"kubernetes.io/projected/87a988d8-ed78-4396-a4fa-d856ff93860f-kube-api-access-s8njc\") pod \"kube-controller-manager-guard-master-1\" (UID: \"87a988d8-ed78-4396-a4fa-d856ff93860f\") " pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1"
Oct 14 13:11:36.705826 master-1 kubenswrapper[4740]: I1014 13:11:36.705732 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1"
Oct 14 13:11:37.016388 master-1 kubenswrapper[4740]: I1014 13:11:37.016339 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"]
Oct 14 13:11:37.443658 master-1 kubenswrapper[4740]: I1014 13:11:37.443587 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:37.445000 master-1 kubenswrapper[4740]: I1014 13:11:37.444941 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:37.498954 master-1 kubenswrapper[4740]: I1014 13:11:37.498845 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" event={"ID":"87a988d8-ed78-4396-a4fa-d856ff93860f","Type":"ContainerStarted","Data":"4b1f7acf82faf9bdcca0dde5b3f12921d31e7d8bdd27d27770a8f5e998402643"}
Oct 14 13:11:37.498954 master-1 kubenswrapper[4740]: I1014 13:11:37.498954 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" event={"ID":"87a988d8-ed78-4396-a4fa-d856ff93860f","Type":"ContainerStarted","Data":"89582cb36d460bf9ccc1fabc5c18cdfd521ef9a226b91e8cf678ea56dc214c4d"}
Oct 14 13:11:37.522790 master-1 kubenswrapper[4740]: I1014 13:11:37.522710 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podStartSLOduration=1.522689271 podStartE2EDuration="1.522689271s" podCreationTimestamp="2025-10-14 13:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:11:37.521694541 +0000 UTC m=+323.331983910" watchObservedRunningTime="2025-10-14 13:11:37.522689271 +0000 UTC m=+323.332978610"
Oct 14 13:11:38.508880 master-1 kubenswrapper[4740]: I1014 13:11:38.508815 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1"
Oct 14 13:11:38.514767 master-1 kubenswrapper[4740]: I1014 13:11:38.514718 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1"
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: I1014 13:11:39.034053 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:11:39.034159 master-1 kubenswrapper[4740]: I1014 13:11:39.034135 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:40.102720 master-1 kubenswrapper[4740]: I1014 13:11:40.102642 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:11:40.793918 master-1 kubenswrapper[4740]: I1014 13:11:40.793749 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-guard-master-1"]
Oct 14 13:11:41.872302 master-1 kubenswrapper[4740]: I1014 13:11:41.872239 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:41.872302 master-1 kubenswrapper[4740]: I1014 13:11:41.872293 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:41.872302 master-1 kubenswrapper[4740]: I1014 13:11:41.872303 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:41.873326 master-1 kubenswrapper[4740]: I1014 13:11:41.872604 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:41.877179 master-1 kubenswrapper[4740]: I1014 13:11:41.877143 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:41.879623 master-1 kubenswrapper[4740]: I1014 13:11:41.879583 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:42.440104 master-1 kubenswrapper[4740]: I1014 13:11:42.440042 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:42.440637 master-1 kubenswrapper[4740]: I1014 13:11:42.440580 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:42.540442 master-1 kubenswrapper[4740]: I1014 13:11:42.540322 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: I1014 13:11:44.033697 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:11:44.033788 master-1 kubenswrapper[4740]: I1014 13:11:44.033782 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:45.560460 master-1 kubenswrapper[4740]: I1014 13:11:45.560269 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/wait-for-host-port/0.log"
Oct 14 13:11:45.560460 master-1 kubenswrapper[4740]: I1014 13:11:45.560370 4740 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03" exitCode=124
Oct 14 13:11:45.560460 master-1 kubenswrapper[4740]: I1014 13:11:45.560431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerDied","Data":"c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03"}
Oct 14 13:11:46.570475 master-1 kubenswrapper[4740]: I1014 13:11:46.570377 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/wait-for-host-port/0.log"
Oct 14 13:11:46.570475 master-1 kubenswrapper[4740]: I1014 13:11:46.570475 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"8cf8d336358e5e89ddb3d21d4fac5892909c3f2b88f04a63d122268437bd6a7a"}
Oct 14 13:11:47.441255 master-1 kubenswrapper[4740]: I1014 13:11:47.441138 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:47.441518 master-1 kubenswrapper[4740]: I1014 13:11:47.441327 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: I1014 13:11:49.032669 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:11:49.032817 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:11:49.034337 master-1 kubenswrapper[4740]: I1014 13:11:49.032861 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:50.605661 master-1 kubenswrapper[4740]: I1014 13:11:50.605526 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6576f6bc9d-xfzjr"]
Oct 14 13:11:50.607049 master-1 kubenswrapper[4740]: I1014 13:11:50.605882 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" containerID="cri-o://2a4c2ed2bbbd4797e6180de90b1ee5e438d370126f0614ca02705325ec43d7bf" gracePeriod=120
Oct 14 13:11:50.607049 master-1 kubenswrapper[4740]: I1014 13:11:50.606342 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://12d72bb9d4324b183104d8033fbb4b64412be63d92c608ad75fd099e5f63f4a7" gracePeriod=120
Oct 14 13:11:51.605981 master-1 kubenswrapper[4740]: I1014 13:11:51.605870 4740 generic.go:334] "Generic (PLEG): container finished" podID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerID="12d72bb9d4324b183104d8033fbb4b64412be63d92c608ad75fd099e5f63f4a7" exitCode=0
Oct 14 13:11:51.605981 master-1 kubenswrapper[4740]: I1014 13:11:51.605923 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" event={"ID":"ed68870d-0f75-4bac-8f5e-36016becfd08","Type":"ContainerDied","Data":"12d72bb9d4324b183104d8033fbb4b64412be63d92c608ad75fd099e5f63f4a7"}
Oct 14 13:11:51.879503 master-1 kubenswrapper[4740]: I1014 13:11:51.879352 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:11:52.440665 master-1 kubenswrapper[4740]: I1014 13:11:52.440545 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:52.441134 master-1 kubenswrapper[4740]: I1014 13:11:52.440660 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: I1014 13:11:54.035933 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:11:54.036040 master-1 kubenswrapper[4740]: I1014 13:11:54.036098 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: I1014 13:11:55.084189 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:11:55.084290 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:11:55.086858 master-1 kubenswrapper[4740]: I1014 13:11:55.084316 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:11:57.440871 master-1 kubenswrapper[4740]: I1014 13:11:57.440772 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:11:57.442841 master-1 kubenswrapper[4740]: I1014 13:11:57.440925 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: I1014 13:11:59.032802 4740 patch_prober.go:28] interesting pod/apiserver-c57444595-zs4m8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:11:59.032888 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:11:59.035199 master-1 kubenswrapper[4740]: I1014 13:11:59.032905 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: I1014 13:12:00.083143 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:12:00.083277 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:12:00.085189 master-1 kubenswrapper[4740]: I1014 13:12:00.083498 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:00.670613 master-1 kubenswrapper[4740]: I1014 13:12:00.670342 4740 generic.go:334] "Generic (PLEG): container finished" podID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerID="194ea90143b4d79876e5b96800a908311ed2f6a1f27daf72bfecc0523fd85c7f" exitCode=0
Oct 14 13:12:00.670613 master-1 kubenswrapper[4740]: I1014 13:12:00.670438 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" event={"ID":"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6","Type":"ContainerDied","Data":"194ea90143b4d79876e5b96800a908311ed2f6a1f27daf72bfecc0523fd85c7f"}
Oct 14 13:12:00.859377 master-1 kubenswrapper[4740]: I1014 13:12:00.859330 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8"
Oct 14 13:12:00.986845 master-1 kubenswrapper[4740]: I1014 13:12:00.986775 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bcf7659b-pckjm"]
Oct 14 13:12:00.987327 master-1 kubenswrapper[4740]: E1014 13:12:00.987283 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" podUID="686cb294-f678-4e26-9305-2756573cadb8"
Oct 14 13:12:01.008657 master-1 kubenswrapper[4740]: I1014 13:12:01.008569 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"]
Oct 14 13:12:01.010904 master-1 kubenswrapper[4740]: E1014 13:12:01.010839 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" podUID="0a959dc9-9b10-4cb5-b750-bedfa6fff093"
Oct 14 13:12:01.036275 master-1 kubenswrapper[4740]: I1014 13:12:01.036115 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-encryption-config\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.036275 master-1 kubenswrapper[4740]: I1014 13:12:01.036265 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-dir\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.036624 master-1 kubenswrapper[4740]: I1014 13:12:01.036311 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsfpl\" (UniqueName: \"kubernetes.io/projected/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-kube-api-access-jsfpl\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.036624 master-1 kubenswrapper[4740]: I1014 13:12:01.036392 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-serving-ca\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.036760 master-1 kubenswrapper[4740]: I1014 13:12:01.036693 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-policies\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.036825 master-1 kubenswrapper[4740]: I1014 13:12:01.036804 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-serving-cert\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.036897 master-1 kubenswrapper[4740]: I1014 13:12:01.036872 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-trusted-ca-bundle\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.036968 master-1 kubenswrapper[4740]: I1014 13:12:01.036942 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-client\") pod \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\" (UID: \"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6\") "
Oct 14 13:12:01.037476 master-1 kubenswrapper[4740]: I1014 13:12:01.037399 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:01.037627 master-1 kubenswrapper[4740]: I1014 13:12:01.037446 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:12:01.037781 master-1 kubenswrapper[4740]: I1014 13:12:01.037696 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.037781 master-1 kubenswrapper[4740]: I1014 13:12:01.037721 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-serving-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.038317 master-1 kubenswrapper[4740]: I1014 13:12:01.038272 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:01.038656 master-1 kubenswrapper[4740]: I1014 13:12:01.038586 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:01.041944 master-1 kubenswrapper[4740]: I1014 13:12:01.041839 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:12:01.042081 master-1 kubenswrapper[4740]: I1014 13:12:01.041957 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-kube-api-access-jsfpl" (OuterVolumeSpecName: "kube-api-access-jsfpl") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "kube-api-access-jsfpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:12:01.042887 master-1 kubenswrapper[4740]: I1014 13:12:01.042837 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:12:01.043275 master-1 kubenswrapper[4740]: I1014 13:12:01.043201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" (UID: "57cd904e-5dfb-4cc1-8bd8-8adf12b276c6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:12:01.139172 master-1 kubenswrapper[4740]: I1014 13:12:01.139044 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.140003 master-1 kubenswrapper[4740]: I1014 13:12:01.139976 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.140138 master-1 kubenswrapper[4740]: I1014 13:12:01.140117 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.140297 master-1 kubenswrapper[4740]: I1014 13:12:01.140276 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsfpl\" (UniqueName: \"kubernetes.io/projected/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-kube-api-access-jsfpl\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.140413 master-1 kubenswrapper[4740]: I1014 13:12:01.140395 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-audit-policies\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.140663 master-1 kubenswrapper[4740]: I1014 13:12:01.140642 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:01.680155 master-1 kubenswrapper[4740]: I1014 13:12:01.680062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8" event={"ID":"57cd904e-5dfb-4cc1-8bd8-8adf12b276c6","Type":"ContainerDied","Data":"a7a0890d7ffcce8e3f0c608219d432f3f64f3d0bdbc36db56620e1dfeaa9fe81"}
Oct 14 13:12:01.680524 master-1 kubenswrapper[4740]: I1014 13:12:01.680169 4740 scope.go:117] "RemoveContainer" containerID="194ea90143b4d79876e5b96800a908311ed2f6a1f27daf72bfecc0523fd85c7f"
Oct 14 13:12:01.680524 master-1 kubenswrapper[4740]: I1014 13:12:01.680179 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-c57444595-zs4m8"
Oct 14 13:12:01.680524 master-1 kubenswrapper[4740]: I1014 13:12:01.680089 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:12:01.680915 master-1 kubenswrapper[4740]: I1014 13:12:01.680875 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm"
Oct 14 13:12:01.693892 master-1 kubenswrapper[4740]: I1014 13:12:01.693511 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"
Oct 14 13:12:01.703719 master-1 kubenswrapper[4740]: I1014 13:12:01.703659 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:12:01.708753 master-1 kubenswrapper[4740]: I1014 13:12:01.708285 4740 scope.go:117] "RemoveContainer" containerID="994a341162264e39b9c97158b4e18868680b0687f0b6a63a8495aa495b95e9e1" Oct 14 13:12:01.744698 master-1 kubenswrapper[4740]: I1014 13:12:01.744588 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-c57444595-zs4m8"] Oct 14 13:12:01.749487 master-1 kubenswrapper[4740]: I1014 13:12:01.749381 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-c57444595-zs4m8"] Oct 14 13:12:01.850844 master-1 kubenswrapper[4740]: I1014 13:12:01.850761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") pod \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " Oct 14 13:12:01.851142 master-1 kubenswrapper[4740]: I1014 13:12:01.850876 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-proxy-ca-bundles\") pod \"686cb294-f678-4e26-9305-2756573cadb8\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " Oct 14 13:12:01.851142 master-1 kubenswrapper[4740]: I1014 13:12:01.850936 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-config\") pod \"686cb294-f678-4e26-9305-2756573cadb8\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " Oct 14 13:12:01.851142 master-1 kubenswrapper[4740]: I1014 13:12:01.851013 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") pod \"686cb294-f678-4e26-9305-2756573cadb8\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " Oct 14 13:12:01.851142 master-1 kubenswrapper[4740]: I1014 13:12:01.851041 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2hvc\" (UniqueName: \"kubernetes.io/projected/686cb294-f678-4e26-9305-2756573cadb8-kube-api-access-s2hvc\") pod \"686cb294-f678-4e26-9305-2756573cadb8\" (UID: \"686cb294-f678-4e26-9305-2756573cadb8\") " Oct 14 13:12:01.851142 master-1 kubenswrapper[4740]: I1014 13:12:01.851070 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-config\") pod \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " Oct 14 13:12:01.851142 master-1 kubenswrapper[4740]: I1014 13:12:01.851127 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m4c8\" (UniqueName: \"kubernetes.io/projected/0a959dc9-9b10-4cb5-b750-bedfa6fff093-kube-api-access-6m4c8\") pod \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\" (UID: \"0a959dc9-9b10-4cb5-b750-bedfa6fff093\") " Oct 14 13:12:01.851966 master-1 kubenswrapper[4740]: I1014 13:12:01.851889 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "686cb294-f678-4e26-9305-2756573cadb8" (UID: "686cb294-f678-4e26-9305-2756573cadb8"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:12:01.852033 master-1 kubenswrapper[4740]: I1014 13:12:01.851905 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-config" (OuterVolumeSpecName: "config") pod "686cb294-f678-4e26-9305-2756573cadb8" (UID: "686cb294-f678-4e26-9305-2756573cadb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:12:01.852142 master-1 kubenswrapper[4740]: I1014 13:12:01.852075 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-config" (OuterVolumeSpecName: "config") pod "0a959dc9-9b10-4cb5-b750-bedfa6fff093" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:12:01.857048 master-1 kubenswrapper[4740]: I1014 13:12:01.856980 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "686cb294-f678-4e26-9305-2756573cadb8" (UID: "686cb294-f678-4e26-9305-2756573cadb8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:12:01.857213 master-1 kubenswrapper[4740]: I1014 13:12:01.857170 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a959dc9-9b10-4cb5-b750-bedfa6fff093-kube-api-access-6m4c8" (OuterVolumeSpecName: "kube-api-access-6m4c8") pod "0a959dc9-9b10-4cb5-b750-bedfa6fff093" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093"). InnerVolumeSpecName "kube-api-access-6m4c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:12:01.857784 master-1 kubenswrapper[4740]: I1014 13:12:01.857745 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686cb294-f678-4e26-9305-2756573cadb8-kube-api-access-s2hvc" (OuterVolumeSpecName: "kube-api-access-s2hvc") pod "686cb294-f678-4e26-9305-2756573cadb8" (UID: "686cb294-f678-4e26-9305-2756573cadb8"). InnerVolumeSpecName "kube-api-access-s2hvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:12:01.858404 master-1 kubenswrapper[4740]: I1014 13:12:01.858327 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a959dc9-9b10-4cb5-b750-bedfa6fff093" (UID: "0a959dc9-9b10-4cb5-b750-bedfa6fff093"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:12:01.953606 master-1 kubenswrapper[4740]: I1014 13:12:01.953182 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:01.953606 master-1 kubenswrapper[4740]: I1014 13:12:01.953510 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686cb294-f678-4e26-9305-2756573cadb8-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:01.953606 master-1 kubenswrapper[4740]: I1014 13:12:01.953533 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2hvc\" (UniqueName: \"kubernetes.io/projected/686cb294-f678-4e26-9305-2756573cadb8-kube-api-access-s2hvc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:01.953606 master-1 kubenswrapper[4740]: I1014 13:12:01.953552 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m4c8\" (UniqueName: 
\"kubernetes.io/projected/0a959dc9-9b10-4cb5-b750-bedfa6fff093-kube-api-access-6m4c8\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:01.953606 master-1 kubenswrapper[4740]: I1014 13:12:01.953573 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a959dc9-9b10-4cb5-b750-bedfa6fff093-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:01.953606 master-1 kubenswrapper[4740]: I1014 13:12:01.953590 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:01.953606 master-1 kubenswrapper[4740]: I1014 13:12:01.953607 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:02.440747 master-1 kubenswrapper[4740]: I1014 13:12:02.440659 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:12:02.441452 master-1 kubenswrapper[4740]: I1014 13:12:02.440775 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:12:02.684691 master-1 kubenswrapper[4740]: I1014 13:12:02.684596 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-96c4c446c-brl6n"] Oct 14 13:12:02.685411 master-1 kubenswrapper[4740]: E1014 13:12:02.685013 4740 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" Oct 14 13:12:02.685411 master-1 kubenswrapper[4740]: I1014 13:12:02.685044 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" Oct 14 13:12:02.685411 master-1 kubenswrapper[4740]: E1014 13:12:02.685092 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="fix-audit-permissions" Oct 14 13:12:02.685411 master-1 kubenswrapper[4740]: I1014 13:12:02.685109 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="fix-audit-permissions" Oct 14 13:12:02.690901 master-1 kubenswrapper[4740]: I1014 13:12:02.685509 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" containerName="oauth-apiserver" Oct 14 13:12:02.690901 master-1 kubenswrapper[4740]: I1014 13:12:02.687844 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.691199 master-1 kubenswrapper[4740]: I1014 13:12:02.691064 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 14 13:12:02.691920 master-1 kubenswrapper[4740]: I1014 13:12:02.691511 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 13:12:02.691920 master-1 kubenswrapper[4740]: I1014 13:12:02.691899 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 13:12:02.692797 master-1 kubenswrapper[4740]: I1014 13:12:02.692760 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 14 13:12:02.693430 master-1 kubenswrapper[4740]: I1014 13:12:02.692764 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 14 13:12:02.695055 master-1 kubenswrapper[4740]: I1014 13:12:02.694926 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 13:12:02.695378 master-1 kubenswrapper[4740]: I1014 13:12:02.695043 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 14 13:12:02.695378 master-1 kubenswrapper[4740]: I1014 13:12:02.695181 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 14 13:12:02.700438 master-1 kubenswrapper[4740]: I1014 13:12:02.697324 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bcf7659b-pckjm" Oct 14 13:12:02.700438 master-1 kubenswrapper[4740]: I1014 13:12:02.697462 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5" Oct 14 13:12:02.702437 master-1 kubenswrapper[4740]: I1014 13:12:02.702373 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-96c4c446c-brl6n"] Oct 14 13:12:02.765861 master-1 kubenswrapper[4740]: I1014 13:12:02.765214 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bcf7659b-pckjm"] Oct 14 13:12:02.778281 master-1 kubenswrapper[4740]: I1014 13:12:02.778202 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bcf7659b-pckjm"] Oct 14 13:12:02.813483 master-1 kubenswrapper[4740]: I1014 13:12:02.813391 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"] Oct 14 13:12:02.816561 master-1 kubenswrapper[4740]: I1014 13:12:02.816502 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5"] Oct 14 13:12:02.865432 master-1 kubenswrapper[4740]: I1014 13:12:02.865319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-policies\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.865942 master-1 kubenswrapper[4740]: I1014 13:12:02.865885 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-serving-ca\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 
13:12:02.866043 master-1 kubenswrapper[4740]: I1014 13:12:02.865941 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-client\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.866113 master-1 kubenswrapper[4740]: I1014 13:12:02.866043 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn88j\" (UniqueName: \"kubernetes.io/projected/ebfb9d2f-6716-4abe-b781-0d9632f00498-kube-api-access-sn88j\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.866113 master-1 kubenswrapper[4740]: I1014 13:12:02.866092 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-trusted-ca-bundle\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.866317 master-1 kubenswrapper[4740]: I1014 13:12:02.866192 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-serving-cert\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.866317 master-1 kubenswrapper[4740]: I1014 13:12:02.866256 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-encryption-config\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.866317 master-1 kubenswrapper[4740]: I1014 13:12:02.866284 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-dir\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.866528 master-1 kubenswrapper[4740]: I1014 13:12:02.866334 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686cb294-f678-4e26-9305-2756573cadb8-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:02.866528 master-1 kubenswrapper[4740]: I1014 13:12:02.866359 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a959dc9-9b10-4cb5-b750-bedfa6fff093-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:02.953741 master-1 kubenswrapper[4740]: I1014 13:12:02.953579 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a959dc9-9b10-4cb5-b750-bedfa6fff093" path="/var/lib/kubelet/pods/0a959dc9-9b10-4cb5-b750-bedfa6fff093/volumes" Oct 14 13:12:02.954513 master-1 kubenswrapper[4740]: I1014 13:12:02.954367 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57cd904e-5dfb-4cc1-8bd8-8adf12b276c6" path="/var/lib/kubelet/pods/57cd904e-5dfb-4cc1-8bd8-8adf12b276c6/volumes" Oct 14 13:12:02.955544 master-1 kubenswrapper[4740]: I1014 13:12:02.955499 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686cb294-f678-4e26-9305-2756573cadb8" path="/var/lib/kubelet/pods/686cb294-f678-4e26-9305-2756573cadb8/volumes" Oct 14 
13:12:02.967420 master-1 kubenswrapper[4740]: I1014 13:12:02.967366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-serving-cert\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.967508 master-1 kubenswrapper[4740]: I1014 13:12:02.967424 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-encryption-config\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.967508 master-1 kubenswrapper[4740]: I1014 13:12:02.967472 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-dir\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.967651 master-1 kubenswrapper[4740]: I1014 13:12:02.967575 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-policies\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.967651 master-1 kubenswrapper[4740]: I1014 13:12:02.967638 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-serving-ca\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " 
pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.967748 master-1 kubenswrapper[4740]: I1014 13:12:02.967688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-client\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.967792 master-1 kubenswrapper[4740]: I1014 13:12:02.967748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-dir\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.967915 master-1 kubenswrapper[4740]: I1014 13:12:02.967869 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn88j\" (UniqueName: \"kubernetes.io/projected/ebfb9d2f-6716-4abe-b781-0d9632f00498-kube-api-access-sn88j\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.968773 master-1 kubenswrapper[4740]: I1014 13:12:02.967989 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-trusted-ca-bundle\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.969275 master-1 kubenswrapper[4740]: I1014 13:12:02.969154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-policies\") pod \"apiserver-96c4c446c-brl6n\" (UID: 
\"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.969585 master-1 kubenswrapper[4740]: I1014 13:12:02.969331 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-serving-ca\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.969585 master-1 kubenswrapper[4740]: I1014 13:12:02.969526 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-trusted-ca-bundle\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.971615 master-1 kubenswrapper[4740]: I1014 13:12:02.971552 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-encryption-config\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.971908 master-1 kubenswrapper[4740]: I1014 13:12:02.971859 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-serving-cert\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.973578 master-1 kubenswrapper[4740]: I1014 13:12:02.973533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-client\") pod 
\"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:02.993066 master-1 kubenswrapper[4740]: I1014 13:12:02.992995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn88j\" (UniqueName: \"kubernetes.io/projected/ebfb9d2f-6716-4abe-b781-0d9632f00498-kube-api-access-sn88j\") pod \"apiserver-96c4c446c-brl6n\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:03.025408 master-1 kubenswrapper[4740]: I1014 13:12:03.024673 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:03.278322 master-1 kubenswrapper[4740]: I1014 13:12:03.278070 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-96c4c446c-brl6n"] Oct 14 13:12:03.704939 master-1 kubenswrapper[4740]: I1014 13:12:03.704896 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/wait-for-host-port/0.log" Oct 14 13:12:03.705820 master-1 kubenswrapper[4740]: I1014 13:12:03.704951 4740 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="8cf8d336358e5e89ddb3d21d4fac5892909c3f2b88f04a63d122268437bd6a7a" exitCode=0 Oct 14 13:12:03.705820 master-1 kubenswrapper[4740]: I1014 13:12:03.705020 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerDied","Data":"8cf8d336358e5e89ddb3d21d4fac5892909c3f2b88f04a63d122268437bd6a7a"} Oct 14 13:12:03.705820 master-1 kubenswrapper[4740]: I1014 13:12:03.705062 4740 scope.go:117] "RemoveContainer" 
containerID="c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03" Oct 14 13:12:03.705820 master-1 kubenswrapper[4740]: I1014 13:12:03.705516 4740 scope.go:117] "RemoveContainer" containerID="c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03" Oct 14 13:12:03.708609 master-1 kubenswrapper[4740]: I1014 13:12:03.708565 4740 generic.go:334] "Generic (PLEG): container finished" podID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerID="68ebc7959133a6009d0461f663d3d8332f3db7cc21e6013363b08f4d56e8d065" exitCode=0 Oct 14 13:12:03.708740 master-1 kubenswrapper[4740]: I1014 13:12:03.708612 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" event={"ID":"ebfb9d2f-6716-4abe-b781-0d9632f00498","Type":"ContainerDied","Data":"68ebc7959133a6009d0461f663d3d8332f3db7cc21e6013363b08f4d56e8d065"} Oct 14 13:12:03.708740 master-1 kubenswrapper[4740]: I1014 13:12:03.708645 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" event={"ID":"ebfb9d2f-6716-4abe-b781-0d9632f00498","Type":"ContainerStarted","Data":"eee0ee6b25d6d7e91442bd6108b3db1c9b1e388a31d368ca7c194a15ba4cdb5f"} Oct 14 13:12:03.752436 master-1 kubenswrapper[4740]: E1014 13:12:03.752371 4740 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-master-1_openshift-kube-scheduler_a61df698d34d049669621b2249bfe758_0 in pod sandbox 1cf141079f9748454ec19ec0db69cd859eba31f8cbfe7a61434ebcb0f25e4ba5 from index: no such id: 'c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03'" containerID="c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03" Oct 14 13:12:03.752937 master-1 kubenswrapper[4740]: E1014 13:12:03.752446 4740 kuberuntime_container.go:896] "Unhandled Error" err="failed to remove pod init container \"wait-for-host-port\": rpc error: code = 
Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-master-1_openshift-kube-scheduler_a61df698d34d049669621b2249bfe758_0 in pod sandbox 1cf141079f9748454ec19ec0db69cd859eba31f8cbfe7a61434ebcb0f25e4ba5 from index: no such id: 'c2edd5650de1eeda4d4bdf9b55be316aab661693b7d21be3ebb3d5914e975a03'; Skipping pod \"openshift-kube-scheduler-master-1_openshift-kube-scheduler(a61df698d34d049669621b2249bfe758)\"" logger="UnhandledError" Oct 14 13:12:04.532013 master-1 kubenswrapper[4740]: I1014 13:12:04.531898 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"] Oct 14 13:12:04.532965 master-1 kubenswrapper[4740]: I1014 13:12:04.532924 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.535377 master-1 kubenswrapper[4740]: I1014 13:12:04.535310 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86659fd8d-zhj4d"] Oct 14 13:12:04.536039 master-1 kubenswrapper[4740]: I1014 13:12:04.536002 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.536302 master-1 kubenswrapper[4740]: I1014 13:12:04.536266 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 13:12:04.536824 master-1 kubenswrapper[4740]: I1014 13:12:04.536795 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 14 13:12:04.537040 master-1 kubenswrapper[4740]: I1014 13:12:04.537012 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 14 13:12:04.537412 master-1 kubenswrapper[4740]: I1014 13:12:04.537355 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 14 13:12:04.537573 master-1 kubenswrapper[4740]: I1014 13:12:04.537512 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 13:12:04.543710 master-1 kubenswrapper[4740]: I1014 13:12:04.543646 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 13:12:04.545724 master-1 kubenswrapper[4740]: I1014 13:12:04.545655 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86659fd8d-zhj4d"] Oct 14 13:12:04.546063 master-1 kubenswrapper[4740]: I1014 13:12:04.546006 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 13:12:04.547729 master-1 kubenswrapper[4740]: I1014 13:12:04.547673 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 13:12:04.547982 master-1 kubenswrapper[4740]: I1014 13:12:04.547885 4740 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 13:12:04.548196 master-1 kubenswrapper[4740]: I1014 13:12:04.548065 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 13:12:04.548335 master-1 kubenswrapper[4740]: I1014 13:12:04.548222 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 13:12:04.550478 master-1 kubenswrapper[4740]: I1014 13:12:04.550428 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"] Oct 14 13:12:04.590985 master-1 kubenswrapper[4740]: I1014 13:12:04.590905 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-config\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.590985 master-1 kubenswrapper[4740]: I1014 13:12:04.590963 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjrqm\" (UniqueName: \"kubernetes.io/projected/cb936031-86ec-491d-a5cd-860b0b04f3e8-kube-api-access-zjrqm\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.591296 master-1 kubenswrapper[4740]: I1014 13:12:04.591006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-client-ca\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " 
pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.591296 master-1 kubenswrapper[4740]: I1014 13:12:04.591035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-proxy-ca-bundles\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.591296 master-1 kubenswrapper[4740]: I1014 13:12:04.591052 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-config\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.591296 master-1 kubenswrapper[4740]: I1014 13:12:04.591068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skwd\" (UniqueName: \"kubernetes.io/projected/e4c8f12e-4b62-49eb-a466-af75a571c62f-kube-api-access-4skwd\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.591296 master-1 kubenswrapper[4740]: I1014 13:12:04.591082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-client-ca\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.591296 master-1 kubenswrapper[4740]: I1014 13:12:04.591111 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c8f12e-4b62-49eb-a466-af75a571c62f-serving-cert\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.591296 master-1 kubenswrapper[4740]: I1014 13:12:04.591130 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb936031-86ec-491d-a5cd-860b0b04f3e8-serving-cert\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.692097 master-1 kubenswrapper[4740]: I1014 13:12:04.692015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-client-ca\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.692097 master-1 kubenswrapper[4740]: I1014 13:12:04.692089 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-proxy-ca-bundles\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.692581 master-1 kubenswrapper[4740]: I1014 13:12:04.692145 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-config\") pod 
\"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.692581 master-1 kubenswrapper[4740]: I1014 13:12:04.692173 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4skwd\" (UniqueName: \"kubernetes.io/projected/e4c8f12e-4b62-49eb-a466-af75a571c62f-kube-api-access-4skwd\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.692581 master-1 kubenswrapper[4740]: I1014 13:12:04.692194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-client-ca\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.692581 master-1 kubenswrapper[4740]: I1014 13:12:04.692244 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c8f12e-4b62-49eb-a466-af75a571c62f-serving-cert\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.692581 master-1 kubenswrapper[4740]: I1014 13:12:04.692272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb936031-86ec-491d-a5cd-860b0b04f3e8-serving-cert\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.692581 master-1 kubenswrapper[4740]: I1014 13:12:04.692320 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-config\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.692581 master-1 kubenswrapper[4740]: I1014 13:12:04.692347 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjrqm\" (UniqueName: \"kubernetes.io/projected/cb936031-86ec-491d-a5cd-860b0b04f3e8-kube-api-access-zjrqm\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.693937 master-1 kubenswrapper[4740]: I1014 13:12:04.693889 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-client-ca\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.694018 master-1 kubenswrapper[4740]: I1014 13:12:04.693971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-proxy-ca-bundles\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.694104 master-1 kubenswrapper[4740]: I1014 13:12:04.694055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-client-ca\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " 
pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.695104 master-1 kubenswrapper[4740]: I1014 13:12:04.695030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-config\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.696056 master-1 kubenswrapper[4740]: I1014 13:12:04.695997 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-config\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.697601 master-1 kubenswrapper[4740]: I1014 13:12:04.697560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c8f12e-4b62-49eb-a466-af75a571c62f-serving-cert\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.707310 master-1 kubenswrapper[4740]: I1014 13:12:04.707187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb936031-86ec-491d-a5cd-860b0b04f3e8-serving-cert\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.713144 master-1 kubenswrapper[4740]: I1014 13:12:04.713106 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjrqm\" (UniqueName: 
\"kubernetes.io/projected/cb936031-86ec-491d-a5cd-860b0b04f3e8-kube-api-access-zjrqm\") pod \"controller-manager-86659fd8d-zhj4d\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:04.717925 master-1 kubenswrapper[4740]: I1014 13:12:04.717729 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"c237848c47768b8806a19f783f2d47f481ae5a551fb55ae77977077026c61294"} Oct 14 13:12:04.717925 master-1 kubenswrapper[4740]: I1014 13:12:04.717775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"6fc564eebe0d572c7e176e3aca3156a0fc412212ac1fc3f10e1293f2dcc05d04"} Oct 14 13:12:04.717925 master-1 kubenswrapper[4740]: I1014 13:12:04.717787 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"a61df698d34d049669621b2249bfe758","Type":"ContainerStarted","Data":"7ed5379248b9c8e16850c8587a413da8fce2a5280c56803e5377b6801674d1a9"} Oct 14 13:12:04.718114 master-1 kubenswrapper[4740]: I1014 13:12:04.718070 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:12:04.720031 master-1 kubenswrapper[4740]: I1014 13:12:04.719964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" event={"ID":"ebfb9d2f-6716-4abe-b781-0d9632f00498","Type":"ContainerStarted","Data":"c5cd1b05a00ba84888e4a60b94053728d4fbb75e95c5e2e3f17dac5202720621"} Oct 14 13:12:04.725722 master-1 kubenswrapper[4740]: I1014 13:12:04.725674 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4skwd\" (UniqueName: \"kubernetes.io/projected/e4c8f12e-4b62-49eb-a466-af75a571c62f-kube-api-access-4skwd\") pod \"route-controller-manager-77674cffc8-k5fvv\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") " pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.742102 master-1 kubenswrapper[4740]: I1014 13:12:04.742014 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podStartSLOduration=49.741996625 podStartE2EDuration="49.741996625s" podCreationTimestamp="2025-10-14 13:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:12:04.737932883 +0000 UTC m=+350.548222222" watchObservedRunningTime="2025-10-14 13:12:04.741996625 +0000 UTC m=+350.552285954" Oct 14 13:12:04.859365 master-1 kubenswrapper[4740]: I1014 13:12:04.859270 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:04.875549 master-1 kubenswrapper[4740]: I1014 13:12:04.875474 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: I1014 13:12:05.081498 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: I1014 13:12:05.081552 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:05.082429 master-1 kubenswrapper[4740]: I1014 13:12:05.081638 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" Oct 14 13:12:05.117887 master-1 kubenswrapper[4740]: I1014 13:12:05.116709 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podStartSLOduration=56.116679403 podStartE2EDuration="56.116679403s" podCreationTimestamp="2025-10-14 13:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:12:04.768245376 +0000 UTC m=+350.578534715" watchObservedRunningTime="2025-10-14 13:12:05.116679403 +0000 UTC m=+350.926968732" Oct 14 13:12:05.327332 master-1 kubenswrapper[4740]: I1014 13:12:05.327263 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86659fd8d-zhj4d"] Oct 14 13:12:05.335462 master-1 kubenswrapper[4740]: W1014 13:12:05.335389 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb936031_86ec_491d_a5cd_860b0b04f3e8.slice/crio-fc778d35193100bae4276b9b4f29c889eadfe32c10208a396986c7ff43bc2532 WatchSource:0}: Error finding container fc778d35193100bae4276b9b4f29c889eadfe32c10208a396986c7ff43bc2532: Status 404 returned 
error can't find the container with id fc778d35193100bae4276b9b4f29c889eadfe32c10208a396986c7ff43bc2532 Oct 14 13:12:05.395067 master-1 kubenswrapper[4740]: I1014 13:12:05.394675 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"] Oct 14 13:12:05.402203 master-1 kubenswrapper[4740]: W1014 13:12:05.402147 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4c8f12e_4b62_49eb_a466_af75a571c62f.slice/crio-d97eb34a8632f0701dd952586765db3961305b34f75564be0070e3773d6d0ebe WatchSource:0}: Error finding container d97eb34a8632f0701dd952586765db3961305b34f75564be0070e3773d6d0ebe: Status 404 returned error can't find the container with id d97eb34a8632f0701dd952586765db3961305b34f75564be0070e3773d6d0ebe Oct 14 13:12:05.734415 master-1 kubenswrapper[4740]: I1014 13:12:05.734369 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerStarted","Data":"d97eb34a8632f0701dd952586765db3961305b34f75564be0070e3773d6d0ebe"} Oct 14 13:12:05.735622 master-1 kubenswrapper[4740]: I1014 13:12:05.735568 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" event={"ID":"cb936031-86ec-491d-a5cd-860b0b04f3e8","Type":"ContainerStarted","Data":"fc778d35193100bae4276b9b4f29c889eadfe32c10208a396986c7ff43bc2532"} Oct 14 13:12:07.447729 master-1 kubenswrapper[4740]: I1014 13:12:07.447676 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" Oct 14 13:12:08.025219 master-1 kubenswrapper[4740]: I1014 13:12:08.025126 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:08.026079 master-1 kubenswrapper[4740]: I1014 13:12:08.026042 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:08.035534 master-1 kubenswrapper[4740]: I1014 13:12:08.035491 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:08.745294 master-1 kubenswrapper[4740]: I1014 13:12:08.745081 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86659fd8d-zhj4d"] Oct 14 13:12:08.756735 master-1 kubenswrapper[4740]: I1014 13:12:08.756668 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerStarted","Data":"ffd4998245ebc17a6f03025aacb5ec867c7637eefba8864af77e8d4e546113b1"} Oct 14 13:12:08.758936 master-1 kubenswrapper[4740]: I1014 13:12:08.758879 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" event={"ID":"cb936031-86ec-491d-a5cd-860b0b04f3e8","Type":"ContainerStarted","Data":"f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437"} Oct 14 13:12:08.770624 master-1 kubenswrapper[4740]: I1014 13:12:08.770573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:08.791141 master-1 kubenswrapper[4740]: I1014 13:12:08.790985 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" podStartSLOduration=4.585705538 podStartE2EDuration="7.790946897s" podCreationTimestamp="2025-10-14 13:12:01 +0000 UTC" firstStartedPulling="2025-10-14 13:12:05.339385675 +0000 UTC 
m=+351.149675004" lastFinishedPulling="2025-10-14 13:12:08.544627004 +0000 UTC m=+354.354916363" observedRunningTime="2025-10-14 13:12:08.787656839 +0000 UTC m=+354.597946168" watchObservedRunningTime="2025-10-14 13:12:08.790946897 +0000 UTC m=+354.601236226" Oct 14 13:12:09.765994 master-1 kubenswrapper[4740]: I1014 13:12:09.765140 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:09.765994 master-1 kubenswrapper[4740]: I1014 13:12:09.765223 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:09.765994 master-1 kubenswrapper[4740]: I1014 13:12:09.765300 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" podUID="cb936031-86ec-491d-a5cd-860b0b04f3e8" containerName="controller-manager" containerID="cri-o://f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437" gracePeriod=30 Oct 14 13:12:09.773408 master-1 kubenswrapper[4740]: I1014 13:12:09.773325 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:12:09.774861 master-1 kubenswrapper[4740]: I1014 13:12:09.774794 4740 patch_prober.go:28] interesting pod/controller-manager-86659fd8d-zhj4d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.69:8443/healthz\": read tcp 10.128.0.2:49768->10.128.0.69:8443: read: connection reset by peer" start-of-body= Oct 14 13:12:09.774971 master-1 kubenswrapper[4740]: I1014 13:12:09.774879 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" podUID="cb936031-86ec-491d-a5cd-860b0b04f3e8" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.69:8443/healthz\": read tcp 10.128.0.2:49768->10.128.0.69:8443: read: connection reset by peer" Oct 14 13:12:09.790290 master-1 kubenswrapper[4740]: I1014 13:12:09.784996 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podStartSLOduration=5.645917618 podStartE2EDuration="8.784970015s" podCreationTimestamp="2025-10-14 13:12:01 +0000 UTC" firstStartedPulling="2025-10-14 13:12:05.406026131 +0000 UTC m=+351.216315480" lastFinishedPulling="2025-10-14 13:12:08.545078508 +0000 UTC m=+354.355367877" observedRunningTime="2025-10-14 13:12:09.78180264 +0000 UTC m=+355.592091999" watchObservedRunningTime="2025-10-14 13:12:09.784970015 +0000 UTC m=+355.595259374" Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: I1014 13:12:10.082762 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 
13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:10.083187 master-1 kubenswrapper[4740]: I1014 13:12:10.082832 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:10.099071 master-1 kubenswrapper[4740]: I1014 13:12:10.099020 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:12:10.207797 master-1 kubenswrapper[4740]: I1014 13:12:10.207020 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:10.243803 master-1 kubenswrapper[4740]: I1014 13:12:10.243740 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56cfb99cfd-9798f"] Oct 14 13:12:10.244826 master-1 kubenswrapper[4740]: E1014 13:12:10.244789 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb936031-86ec-491d-a5cd-860b0b04f3e8" containerName="controller-manager" Oct 14 13:12:10.244826 master-1 kubenswrapper[4740]: I1014 13:12:10.244817 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb936031-86ec-491d-a5cd-860b0b04f3e8" containerName="controller-manager" Oct 14 13:12:10.245323 master-1 kubenswrapper[4740]: I1014 13:12:10.244940 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb936031-86ec-491d-a5cd-860b0b04f3e8" containerName="controller-manager" Oct 14 13:12:10.245433 master-1 kubenswrapper[4740]: I1014 13:12:10.245408 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.258474 master-1 kubenswrapper[4740]: I1014 13:12:10.258393 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56cfb99cfd-9798f"] Oct 14 13:12:10.280483 master-1 kubenswrapper[4740]: I1014 13:12:10.280373 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjrqm\" (UniqueName: \"kubernetes.io/projected/cb936031-86ec-491d-a5cd-860b0b04f3e8-kube-api-access-zjrqm\") pod \"cb936031-86ec-491d-a5cd-860b0b04f3e8\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " Oct 14 13:12:10.280483 master-1 kubenswrapper[4740]: I1014 13:12:10.280446 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-client-ca\") pod \"cb936031-86ec-491d-a5cd-860b0b04f3e8\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " Oct 14 13:12:10.280483 master-1 kubenswrapper[4740]: I1014 13:12:10.280508 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-proxy-ca-bundles\") pod \"cb936031-86ec-491d-a5cd-860b0b04f3e8\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.280549 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-config\") pod \"cb936031-86ec-491d-a5cd-860b0b04f3e8\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.280632 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cb936031-86ec-491d-a5cd-860b0b04f3e8-serving-cert\") pod \"cb936031-86ec-491d-a5cd-860b0b04f3e8\" (UID: \"cb936031-86ec-491d-a5cd-860b0b04f3e8\") " Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.280980 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-proxy-ca-bundles\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.281019 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jzj6\" (UniqueName: \"kubernetes.io/projected/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-kube-api-access-4jzj6\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.281068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-client-ca\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.281115 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-config\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.282070 
master-1 kubenswrapper[4740]: I1014 13:12:10.281203 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-serving-cert\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.281816 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cb936031-86ec-491d-a5cd-860b0b04f3e8" (UID: "cb936031-86ec-491d-a5cd-860b0b04f3e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.281841 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-config" (OuterVolumeSpecName: "config") pod "cb936031-86ec-491d-a5cd-860b0b04f3e8" (UID: "cb936031-86ec-491d-a5cd-860b0b04f3e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:12:10.282070 master-1 kubenswrapper[4740]: I1014 13:12:10.281862 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb936031-86ec-491d-a5cd-860b0b04f3e8" (UID: "cb936031-86ec-491d-a5cd-860b0b04f3e8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:12:10.286454 master-1 kubenswrapper[4740]: I1014 13:12:10.286375 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb936031-86ec-491d-a5cd-860b0b04f3e8-kube-api-access-zjrqm" (OuterVolumeSpecName: "kube-api-access-zjrqm") pod "cb936031-86ec-491d-a5cd-860b0b04f3e8" (UID: "cb936031-86ec-491d-a5cd-860b0b04f3e8"). InnerVolumeSpecName "kube-api-access-zjrqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:12:10.286454 master-1 kubenswrapper[4740]: I1014 13:12:10.286420 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb936031-86ec-491d-a5cd-860b0b04f3e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb936031-86ec-491d-a5cd-860b0b04f3e8" (UID: "cb936031-86ec-491d-a5cd-860b0b04f3e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:12:10.383163 master-1 kubenswrapper[4740]: I1014 13:12:10.382970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-proxy-ca-bundles\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.383163 master-1 kubenswrapper[4740]: I1014 13:12:10.383031 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jzj6\" (UniqueName: \"kubernetes.io/projected/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-kube-api-access-4jzj6\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.383163 master-1 kubenswrapper[4740]: I1014 13:12:10.383071 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-client-ca\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.383163 master-1 kubenswrapper[4740]: I1014 13:12:10.383090 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-config\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.383752 master-1 kubenswrapper[4740]: I1014 13:12:10.383266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-serving-cert\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.383752 master-1 kubenswrapper[4740]: I1014 13:12:10.383316 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb936031-86ec-491d-a5cd-860b0b04f3e8-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:10.383752 master-1 kubenswrapper[4740]: I1014 13:12:10.383328 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjrqm\" (UniqueName: \"kubernetes.io/projected/cb936031-86ec-491d-a5cd-860b0b04f3e8-kube-api-access-zjrqm\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:10.383752 master-1 kubenswrapper[4740]: I1014 13:12:10.383339 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 
13:12:10.383752 master-1 kubenswrapper[4740]: I1014 13:12:10.383368 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:10.383752 master-1 kubenswrapper[4740]: I1014 13:12:10.383379 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb936031-86ec-491d-a5cd-860b0b04f3e8-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:12:10.384804 master-1 kubenswrapper[4740]: I1014 13:12:10.384763 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-client-ca\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.385120 master-1 kubenswrapper[4740]: I1014 13:12:10.385087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-config\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.385332 master-1 kubenswrapper[4740]: I1014 13:12:10.385280 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-proxy-ca-bundles\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.388395 master-1 kubenswrapper[4740]: I1014 13:12:10.388337 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-serving-cert\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.402804 master-1 kubenswrapper[4740]: I1014 13:12:10.402737 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jzj6\" (UniqueName: \"kubernetes.io/projected/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-kube-api-access-4jzj6\") pod \"controller-manager-56cfb99cfd-9798f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.570641 master-1 kubenswrapper[4740]: I1014 13:12:10.570418 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:10.774965 master-1 kubenswrapper[4740]: I1014 13:12:10.774874 4740 generic.go:334] "Generic (PLEG): container finished" podID="cb936031-86ec-491d-a5cd-860b0b04f3e8" containerID="f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437" exitCode=0 Oct 14 13:12:10.774965 master-1 kubenswrapper[4740]: I1014 13:12:10.774938 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" Oct 14 13:12:10.775540 master-1 kubenswrapper[4740]: I1014 13:12:10.774976 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" event={"ID":"cb936031-86ec-491d-a5cd-860b0b04f3e8","Type":"ContainerDied","Data":"f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437"} Oct 14 13:12:10.775540 master-1 kubenswrapper[4740]: I1014 13:12:10.775011 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86659fd8d-zhj4d" event={"ID":"cb936031-86ec-491d-a5cd-860b0b04f3e8","Type":"ContainerDied","Data":"fc778d35193100bae4276b9b4f29c889eadfe32c10208a396986c7ff43bc2532"} Oct 14 13:12:10.775540 master-1 kubenswrapper[4740]: I1014 13:12:10.775037 4740 scope.go:117] "RemoveContainer" containerID="f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437" Oct 14 13:12:10.802186 master-1 kubenswrapper[4740]: I1014 13:12:10.802155 4740 scope.go:117] "RemoveContainer" containerID="f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437" Oct 14 13:12:10.803066 master-1 kubenswrapper[4740]: E1014 13:12:10.802990 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437\": container with ID starting with f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437 not found: ID does not exist" containerID="f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437" Oct 14 13:12:10.803305 master-1 kubenswrapper[4740]: I1014 13:12:10.803085 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437"} err="failed to get container status 
\"f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437\": rpc error: code = NotFound desc = could not find container \"f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437\": container with ID starting with f6452680a552545cc8083963d1534062577fa4e76cc2eee45d7f987b662f4437 not found: ID does not exist" Oct 14 13:12:10.825819 master-1 kubenswrapper[4740]: I1014 13:12:10.825518 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86659fd8d-zhj4d"] Oct 14 13:12:10.829507 master-1 kubenswrapper[4740]: I1014 13:12:10.829456 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86659fd8d-zhj4d"] Oct 14 13:12:10.956352 master-1 kubenswrapper[4740]: I1014 13:12:10.956192 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb936031-86ec-491d-a5cd-860b0b04f3e8" path="/var/lib/kubelet/pods/cb936031-86ec-491d-a5cd-860b0b04f3e8/volumes" Oct 14 13:12:11.035669 master-1 kubenswrapper[4740]: I1014 13:12:11.035614 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56cfb99cfd-9798f"] Oct 14 13:12:11.788861 master-1 kubenswrapper[4740]: I1014 13:12:11.788788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" event={"ID":"95ae2a7e-b760-4dc0-8b0e-adb39439db3f","Type":"ContainerStarted","Data":"7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91"} Oct 14 13:12:11.788861 master-1 kubenswrapper[4740]: I1014 13:12:11.788864 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" event={"ID":"95ae2a7e-b760-4dc0-8b0e-adb39439db3f","Type":"ContainerStarted","Data":"2ee1320fddad365b7df09b4f4ca57138aaa99fa2f79fb6cec87285ae6b280ee5"} Oct 14 13:12:11.816923 master-1 kubenswrapper[4740]: I1014 13:12:11.816809 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" podStartSLOduration=3.816778216 podStartE2EDuration="3.816778216s" podCreationTimestamp="2025-10-14 13:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:12:11.814447947 +0000 UTC m=+357.624737296" watchObservedRunningTime="2025-10-14 13:12:11.816778216 +0000 UTC m=+357.627067545" Oct 14 13:12:12.799592 master-1 kubenswrapper[4740]: I1014 13:12:12.799497 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:12.808324 master-1 kubenswrapper[4740]: I1014 13:12:12.808197 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: I1014 13:12:15.087448 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 
13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:15.087547 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:15.089984 master-1 kubenswrapper[4740]: I1014 13:12:15.087544 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:16.833695 master-1 kubenswrapper[4740]: I1014 13:12:16.833592 4740 generic.go:334] "Generic (PLEG): container finished" podID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerID="57f4d6aac1f3c80fb4d6e8a8343432ff9667911716e629d1c9aa8b443a819f98" exitCode=0 Oct 14 13:12:16.833695 master-1 kubenswrapper[4740]: I1014 13:12:16.833658 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerDied","Data":"57f4d6aac1f3c80fb4d6e8a8343432ff9667911716e629d1c9aa8b443a819f98"} Oct 14 
13:12:16.833695 master-1 kubenswrapper[4740]: I1014 13:12:16.833701 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerStarted","Data":"f8c9d5de8cdc8e09521c2a264d3a5c111dd776eb29cce79eace0db63652de74f"} Oct 14 13:12:17.768897 master-1 kubenswrapper[4740]: I1014 13:12:17.768808 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:12:17.772278 master-1 kubenswrapper[4740]: I1014 13:12:17.772183 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:17.772278 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:17.772278 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:17.772278 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:17.773060 master-1 kubenswrapper[4740]: I1014 13:12:17.772314 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:18.771490 master-1 kubenswrapper[4740]: I1014 13:12:18.771366 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:18.771490 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:18.771490 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:18.771490 master-1 kubenswrapper[4740]: healthz check failed Oct 14 
13:12:18.771490 master-1 kubenswrapper[4740]: I1014 13:12:18.771484 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:19.770673 master-1 kubenswrapper[4740]: I1014 13:12:19.770585 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:19.770673 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:19.770673 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:19.770673 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:19.771186 master-1 kubenswrapper[4740]: I1014 13:12:19.770714 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: I1014 13:12:20.084083 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:20.084348 master-1 kubenswrapper[4740]: I1014 13:12:20.084182 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:20.798046 master-1 kubenswrapper[4740]: I1014 13:12:20.797942 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:20.798046 
master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:20.798046 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:20.798046 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:20.798900 master-1 kubenswrapper[4740]: I1014 13:12:20.798062 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:21.771340 master-1 kubenswrapper[4740]: I1014 13:12:21.771260 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:21.771340 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:21.771340 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:21.771340 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:21.771995 master-1 kubenswrapper[4740]: I1014 13:12:21.771345 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:22.771583 master-1 kubenswrapper[4740]: I1014 13:12:22.771468 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:22.771583 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:22.771583 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:22.771583 master-1 kubenswrapper[4740]: healthz check failed Oct 14 
13:12:22.771583 master-1 kubenswrapper[4740]: I1014 13:12:22.771574 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:23.768552 master-1 kubenswrapper[4740]: I1014 13:12:23.768456 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:12:23.771683 master-1 kubenswrapper[4740]: I1014 13:12:23.771567 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:23.771683 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:23.771683 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:23.771683 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:23.771683 master-1 kubenswrapper[4740]: I1014 13:12:23.771672 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:24.771679 master-1 kubenswrapper[4740]: I1014 13:12:24.771574 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:24.771679 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:24.771679 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:24.771679 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:24.771679 master-1 
kubenswrapper[4740]: I1014 13:12:24.771673 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: I1014 13:12:25.084343 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 
14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:25.084574 master-1 kubenswrapper[4740]: I1014 13:12:25.084456 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:25.771387 master-1 kubenswrapper[4740]: I1014 13:12:25.771273 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:25.771387 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:25.771387 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:25.771387 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:25.771913 master-1 kubenswrapper[4740]: I1014 13:12:25.771401 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:26.770632 master-1 kubenswrapper[4740]: I1014 13:12:26.770479 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:26.770632 master-1 kubenswrapper[4740]: [-]has-synced failed: 
reason withheld Oct 14 13:12:26.770632 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:26.770632 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:26.771682 master-1 kubenswrapper[4740]: I1014 13:12:26.770669 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:27.771582 master-1 kubenswrapper[4740]: I1014 13:12:27.771508 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:27.771582 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:27.771582 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:27.771582 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:27.772177 master-1 kubenswrapper[4740]: I1014 13:12:27.771585 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:27.994345 master-1 kubenswrapper[4740]: I1014 13:12:27.994287 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-96c4c446c-brl6n"] Oct 14 13:12:27.994609 master-1 kubenswrapper[4740]: I1014 13:12:27.994582 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" containerID="cri-o://c5cd1b05a00ba84888e4a60b94053728d4fbb75e95c5e2e3f17dac5202720621" gracePeriod=120 Oct 14 13:12:28.030545 master-1 
kubenswrapper[4740]: I1014 13:12:28.030341 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:28.030545 master-1 kubenswrapper[4740]: I1014 13:12:28.030435 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:28.771299 master-1 kubenswrapper[4740]: I1014 13:12:28.771200 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:28.771299 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:28.771299 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:28.771299 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:28.771637 master-1 kubenswrapper[4740]: I1014 13:12:28.771327 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:29.771147 master-1 kubenswrapper[4740]: I1014 13:12:29.771084 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:29.771147 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:29.771147 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:29.771147 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:29.771147 master-1 kubenswrapper[4740]: I1014 13:12:29.771160 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: I1014 13:12:30.083812 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]log ok Oct 14 
13:12:30.083977 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:30.083977 master-1 kubenswrapper[4740]: I1014 13:12:30.083898 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 14 13:12:30.770491 master-1 kubenswrapper[4740]: I1014 13:12:30.770394 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:30.770491 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:30.770491 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:30.770491 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:30.770948 master-1 kubenswrapper[4740]: I1014 13:12:30.770525 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:31.772191 master-1 kubenswrapper[4740]: I1014 13:12:31.772125 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:31.772191 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:31.772191 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:31.772191 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:31.772191 master-1 kubenswrapper[4740]: I1014 13:12:31.772209 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:32.770638 master-1 kubenswrapper[4740]: I1014 13:12:32.770568 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:32.770638 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:32.770638 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:32.770638 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:32.771523 master-1 kubenswrapper[4740]: I1014 13:12:32.770651 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: I1014 13:12:33.031975 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:12:33.032157 master-1 
kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:33.032157 master-1 kubenswrapper[4740]: I1014 13:12:33.032065 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:33.771827 master-1 kubenswrapper[4740]: I1014 13:12:33.771749 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:33.771827 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:33.771827 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:33.771827 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:33.771827 master-1 kubenswrapper[4740]: I1014 13:12:33.771822 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:33.814714 master-1 kubenswrapper[4740]: I1014 13:12:33.814626 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-3-master-1"] Oct 14 13:12:33.815774 master-1 kubenswrapper[4740]: I1014 13:12:33.815718 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.820069 master-1 kubenswrapper[4740]: I1014 13:12:33.820008 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbs2c" Oct 14 13:12:33.831384 master-1 kubenswrapper[4740]: I1014 13:12:33.831298 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-3-master-1"] Oct 14 13:12:33.894378 master-1 kubenswrapper[4740]: I1014 13:12:33.894301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c5ffc3-1817-4a99-9e46-a205827a136d-kube-api-access\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.894894 master-1 kubenswrapper[4740]: I1014 13:12:33.894863 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.895199 master-1 kubenswrapper[4740]: I1014 13:12:33.895170 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-var-lock\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.996900 master-1 kubenswrapper[4740]: I1014 13:12:33.996778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-var-lock\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " 
pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.997318 master-1 kubenswrapper[4740]: I1014 13:12:33.996950 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c5ffc3-1817-4a99-9e46-a205827a136d-kube-api-access\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.997318 master-1 kubenswrapper[4740]: I1014 13:12:33.997002 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.997318 master-1 kubenswrapper[4740]: I1014 13:12:33.997042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-var-lock\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:33.997318 master-1 kubenswrapper[4740]: I1014 13:12:33.997118 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-kubelet-dir\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:34.032819 master-1 kubenswrapper[4740]: I1014 13:12:34.032694 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c5ffc3-1817-4a99-9e46-a205827a136d-kube-api-access\") pod \"installer-3-master-1\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") " pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:34.133603 master-1 
kubenswrapper[4740]: I1014 13:12:34.133538 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-3-master-1" Oct 14 13:12:34.589114 master-1 kubenswrapper[4740]: I1014 13:12:34.589041 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-3-master-1"] Oct 14 13:12:34.595657 master-1 kubenswrapper[4740]: W1014 13:12:34.595571 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod10c5ffc3_1817_4a99_9e46_a205827a136d.slice/crio-e689e3649ba4e5dc0b6cf33d2f4be69aa218ee8b24a424ec681ac5cb02e9557e WatchSource:0}: Error finding container e689e3649ba4e5dc0b6cf33d2f4be69aa218ee8b24a424ec681ac5cb02e9557e: Status 404 returned error can't find the container with id e689e3649ba4e5dc0b6cf33d2f4be69aa218ee8b24a424ec681ac5cb02e9557e Oct 14 13:12:34.771641 master-1 kubenswrapper[4740]: I1014 13:12:34.771575 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:34.771641 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:34.771641 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:34.771641 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:34.772007 master-1 kubenswrapper[4740]: I1014 13:12:34.771654 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:34.968711 master-1 kubenswrapper[4740]: I1014 13:12:34.967411 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-1" 
event={"ID":"10c5ffc3-1817-4a99-9e46-a205827a136d","Type":"ContainerStarted","Data":"663fc829394d2f5a3ee391939cefd98acd80028126df6598ef23664bfcff9269"} Oct 14 13:12:34.968711 master-1 kubenswrapper[4740]: I1014 13:12:34.967475 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-1" event={"ID":"10c5ffc3-1817-4a99-9e46-a205827a136d","Type":"ContainerStarted","Data":"e689e3649ba4e5dc0b6cf33d2f4be69aa218ee8b24a424ec681ac5cb02e9557e"} Oct 14 13:12:34.993709 master-1 kubenswrapper[4740]: I1014 13:12:34.991319 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-3-master-1" podStartSLOduration=1.991297312 podStartE2EDuration="1.991297312s" podCreationTimestamp="2025-10-14 13:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:12:34.987117636 +0000 UTC m=+380.797407045" watchObservedRunningTime="2025-10-14 13:12:34.991297312 +0000 UTC m=+380.801586651" Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: I1014 13:12:35.084261 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:35.084375 master-1 
kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:35.084375 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:35.085599 master-1 kubenswrapper[4740]: I1014 13:12:35.084387 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:35.771536 master-1 kubenswrapper[4740]: I1014 13:12:35.771455 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:35.771536 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:35.771536 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:35.771536 master-1 
kubenswrapper[4740]: healthz check failed Oct 14 13:12:35.771536 master-1 kubenswrapper[4740]: I1014 13:12:35.771539 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:36.771663 master-1 kubenswrapper[4740]: I1014 13:12:36.771540 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:36.771663 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:36.771663 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:36.771663 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:36.771663 master-1 kubenswrapper[4740]: I1014 13:12:36.771642 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:37.772025 master-1 kubenswrapper[4740]: I1014 13:12:37.771906 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:37.772025 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:37.772025 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:37.772025 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:37.772025 master-1 kubenswrapper[4740]: I1014 13:12:37.771997 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: I1014 13:12:38.033054 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:38.033214 master-1 kubenswrapper[4740]: I1014 13:12:38.033134 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 14 13:12:38.034067 master-1 kubenswrapper[4740]: I1014 13:12:38.033276 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:12:38.771553 master-1 kubenswrapper[4740]: I1014 13:12:38.771440 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:38.771553 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:38.771553 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:38.771553 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:38.772043 master-1 kubenswrapper[4740]: I1014 13:12:38.771556 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:39.772122 master-1 kubenswrapper[4740]: I1014 13:12:39.772014 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:39.772122 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:39.772122 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:39.772122 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:39.772911 master-1 kubenswrapper[4740]: I1014 13:12:39.772146 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500"
Oct 14 13:12:39.986530 master-1 kubenswrapper[4740]: E1014 13:12:39.986360 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" podUID="cc579fa5-c1e0-40ed-b1f3-e953a42e74d6"
Oct 14 13:12:39.986530 master-1 kubenswrapper[4740]: E1014 13:12:39.986360 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" podUID="180ced15-1fb1-464d-85f2-0bcc0d836dab"
Oct 14 13:12:40.002824 master-1 kubenswrapper[4740]: I1014 13:12:40.002766 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:12:40.002824 master-1 kubenswrapper[4740]: I1014 13:12:40.002813 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: I1014 13:12:40.085380 4740 patch_prober.go:28] interesting pod/apiserver-6576f6bc9d-xfzjr container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:12:40.085624 master-1 kubenswrapper[4740]: I1014 13:12:40.085471 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:40.095584 master-1 kubenswrapper[4740]: I1014 13:12:40.095421 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:12:40.771589 master-1 kubenswrapper[4740]: I1014 13:12:40.771526 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:40.771589 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:40.771589 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:40.771589 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:40.772194 master-1 kubenswrapper[4740]: I1014 13:12:40.772151 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:41.771548 master-1 kubenswrapper[4740]: I1014 13:12:41.771470 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:41.771548 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:41.771548 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:41.771548 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:41.771548 master-1 kubenswrapper[4740]: I1014 13:12:41.771578 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:42.020563 master-1 kubenswrapper[4740]: I1014 13:12:42.020387 4740 generic.go:334] "Generic (PLEG): container finished" podID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerID="2a4c2ed2bbbd4797e6180de90b1ee5e438d370126f0614ca02705325ec43d7bf" exitCode=0
Oct 14 13:12:42.020563 master-1 kubenswrapper[4740]: I1014 13:12:42.020424 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" event={"ID":"ed68870d-0f75-4bac-8f5e-36016becfd08","Type":"ContainerDied","Data":"2a4c2ed2bbbd4797e6180de90b1ee5e438d370126f0614ca02705325ec43d7bf"}
Oct 14 13:12:42.511360 master-1 kubenswrapper[4740]: I1014 13:12:42.511090 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:12:42.614566 master-1 kubenswrapper[4740]: I1014 13:12:42.611613 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-3-master-1"]
Oct 14 13:12:42.614566 master-1 kubenswrapper[4740]: I1014 13:12:42.612003 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-3-master-1" podUID="10c5ffc3-1817-4a99-9e46-a205827a136d" containerName="installer" containerID="cri-o://663fc829394d2f5a3ee391939cefd98acd80028126df6598ef23664bfcff9269" gracePeriod=30
Oct 14 13:12:42.656701 master-1 kubenswrapper[4740]: I1014 13:12:42.656591 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-serving-cert\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.656701 master-1 kubenswrapper[4740]: I1014 13:12:42.656694 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-encryption-config\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.657067 master-1 kubenswrapper[4740]: I1014 13:12:42.656736 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-serving-ca\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.657067 master-1 kubenswrapper[4740]: I1014 13:12:42.656787 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-config\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.657067 master-1 kubenswrapper[4740]: I1014 13:12:42.656825 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-node-pullsecrets\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.657067 master-1 kubenswrapper[4740]: I1014 13:12:42.656906 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9svb\" (UniqueName: \"kubernetes.io/projected/ed68870d-0f75-4bac-8f5e-36016becfd08-kube-api-access-l9svb\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.657067 master-1 kubenswrapper[4740]: I1014 13:12:42.656954 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-image-import-ca\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.657067 master-1 kubenswrapper[4740]: I1014 13:12:42.657001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-audit-dir\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.657067 master-1 kubenswrapper[4740]: I1014 13:12:42.657051 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-trusted-ca-bundle\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.657097 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-audit\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.657137 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-client\") pod \"ed68870d-0f75-4bac-8f5e-36016becfd08\" (UID: \"ed68870d-0f75-4bac-8f5e-36016becfd08\") "
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.657186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.657444 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.657565 4740 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-node-pullsecrets\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.658514 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.658817 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.658854 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-audit" (OuterVolumeSpecName: "audit") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.659763 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-config" (OuterVolumeSpecName: "config") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:42.660459 master-1 kubenswrapper[4740]: I1014 13:12:42.659784 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:12:42.662473 master-1 kubenswrapper[4740]: I1014 13:12:42.662425 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:12:42.662562 master-1 kubenswrapper[4740]: I1014 13:12:42.662497 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:12:42.663413 master-1 kubenswrapper[4740]: I1014 13:12:42.663317 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:12:42.665746 master-1 kubenswrapper[4740]: I1014 13:12:42.665564 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed68870d-0f75-4bac-8f5e-36016becfd08-kube-api-access-l9svb" (OuterVolumeSpecName: "kube-api-access-l9svb") pod "ed68870d-0f75-4bac-8f5e-36016becfd08" (UID: "ed68870d-0f75-4bac-8f5e-36016becfd08"). InnerVolumeSpecName "kube-api-access-l9svb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:12:42.758682 master-1 kubenswrapper[4740]: I1014 13:12:42.758585 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed68870d-0f75-4bac-8f5e-36016becfd08-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.758682 master-1 kubenswrapper[4740]: I1014 13:12:42.758655 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.758682 master-1 kubenswrapper[4740]: I1014 13:12:42.758674 4740 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-audit\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.758682 master-1 kubenswrapper[4740]: I1014 13:12:42.758692 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.759009 master-1 kubenswrapper[4740]: I1014 13:12:42.758710 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.759009 master-1 kubenswrapper[4740]: I1014 13:12:42.758727 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed68870d-0f75-4bac-8f5e-36016becfd08-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.759009 master-1 kubenswrapper[4740]: I1014 13:12:42.758745 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-etcd-serving-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.759009 master-1 kubenswrapper[4740]: I1014 13:12:42.758764 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.759009 master-1 kubenswrapper[4740]: I1014 13:12:42.758782 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9svb\" (UniqueName: \"kubernetes.io/projected/ed68870d-0f75-4bac-8f5e-36016becfd08-kube-api-access-l9svb\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.759009 master-1 kubenswrapper[4740]: I1014 13:12:42.758798 4740 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ed68870d-0f75-4bac-8f5e-36016becfd08-image-import-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:42.770949 master-1 kubenswrapper[4740]: I1014 13:12:42.770665 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:42.770949 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:42.770949 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:42.770949 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:42.770949 master-1 kubenswrapper[4740]: I1014 13:12:42.770731 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:43.030048 master-1 kubenswrapper[4740]: I1014 13:12:43.029968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr" event={"ID":"ed68870d-0f75-4bac-8f5e-36016becfd08","Type":"ContainerDied","Data":"2b3581889f1f846473a9dd583060d70caa3514018ccfe65e18619f5e6369bcf8"}
Oct 14 13:12:43.030048 master-1 kubenswrapper[4740]: I1014 13:12:43.030041 4740 scope.go:117] "RemoveContainer" containerID="12d72bb9d4324b183104d8033fbb4b64412be63d92c608ad75fd099e5f63f4a7"
Oct 14 13:12:43.031058 master-1 kubenswrapper[4740]: I1014 13:12:43.030206 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6576f6bc9d-xfzjr"
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: I1014 13:12:43.034656 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:12:43.034747 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:12:43.035830 master-1 kubenswrapper[4740]: I1014 13:12:43.034754 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:43.054571 master-1 kubenswrapper[4740]: I1014 13:12:43.054511 4740 scope.go:117] "RemoveContainer" containerID="2a4c2ed2bbbd4797e6180de90b1ee5e438d370126f0614ca02705325ec43d7bf"
Oct 14 13:12:43.081984 master-1 kubenswrapper[4740]: I1014 13:12:43.081939 4740 scope.go:117] "RemoveContainer" containerID="50e09bd480a9486fece5adcc3edd27b4717e755898d98236cb8e5ad7102da2a0"
Oct 14 13:12:43.093744 master-1 kubenswrapper[4740]: I1014 13:12:43.093668 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-6576f6bc9d-xfzjr"]
Oct 14 13:12:43.110638 master-1 kubenswrapper[4740]: I1014 13:12:43.110542 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-6576f6bc9d-xfzjr"]
Oct 14 13:12:43.469935 master-1 kubenswrapper[4740]: I1014 13:12:43.469783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:12:43.470343 master-1 kubenswrapper[4740]: E1014 13:12:43.470080 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:14:45.470038315 +0000 UTC m=+511.280327684 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:12:43.572530 master-1 kubenswrapper[4740]: I1014 13:12:43.572280 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:12:43.572530 master-1 kubenswrapper[4740]: E1014 13:12:43.572518 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:14:45.572477498 +0000 UTC m=+511.382766857 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:12:43.771556 master-1 kubenswrapper[4740]: I1014 13:12:43.771445 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:43.771556 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:43.771556 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:43.771556 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:43.772168 master-1 kubenswrapper[4740]: I1014 13:12:43.771573 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:44.771711 master-1 kubenswrapper[4740]: I1014 13:12:44.771636 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:44.771711 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:44.771711 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:44.771711 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:44.772967 master-1 kubenswrapper[4740]: I1014 13:12:44.771719 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:44.956204 master-1 kubenswrapper[4740]: I1014 13:12:44.956097 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" path="/var/lib/kubelet/pods/ed68870d-0f75-4bac-8f5e-36016becfd08/volumes"
Oct 14 13:12:45.771524 master-1 kubenswrapper[4740]: I1014 13:12:45.771421 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:45.771524 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:45.771524 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:45.771524 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:45.772203 master-1 kubenswrapper[4740]: I1014 13:12:45.771561 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:46.771651 master-1 kubenswrapper[4740]: I1014 13:12:46.771558 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:46.771651 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:46.771651 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:46.771651 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:46.772572 master-1 kubenswrapper[4740]: I1014 13:12:46.771707 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:47.010515 master-1 kubenswrapper[4740]: I1014 13:12:47.010435 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-4-master-1"]
Oct 14 13:12:47.010801 master-1 kubenswrapper[4740]: E1014 13:12:47.010766 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver-check-endpoints"
Oct 14 13:12:47.010801 master-1 kubenswrapper[4740]: I1014 13:12:47.010798 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver-check-endpoints"
Oct 14 13:12:47.010880 master-1 kubenswrapper[4740]: E1014 13:12:47.010827 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="fix-audit-permissions"
Oct 14 13:12:47.010880 master-1 kubenswrapper[4740]: I1014 13:12:47.010842 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="fix-audit-permissions"
Oct 14 13:12:47.010880 master-1 kubenswrapper[4740]: E1014 13:12:47.010861 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver"
Oct 14 13:12:47.010880 master-1 kubenswrapper[4740]: I1014 13:12:47.010874 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver"
Oct 14 13:12:47.011086 master-1 kubenswrapper[4740]: I1014 13:12:47.011056 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver"
Oct 14 13:12:47.011126 master-1 kubenswrapper[4740]: I1014 13:12:47.011088 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed68870d-0f75-4bac-8f5e-36016becfd08" containerName="openshift-apiserver-check-endpoints"
Oct 14 13:12:47.011884 master-1 kubenswrapper[4740]: I1014 13:12:47.011845 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.025971 master-1 kubenswrapper[4740]: I1014 13:12:47.025828 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-4-master-1"]
Oct 14 13:12:47.126563 master-1 kubenswrapper[4740]: I1014 13:12:47.126465 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-var-lock\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.126563 master-1 kubenswrapper[4740]: I1014 13:12:47.126542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kube-api-access\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.126563 master-1 kubenswrapper[4740]: I1014 13:12:47.126573 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.228590 master-1 kubenswrapper[4740]: I1014 13:12:47.228476 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-var-lock\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.228590 master-1 kubenswrapper[4740]: I1014 13:12:47.228566 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kube-api-access\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.228989 master-1 kubenswrapper[4740]: I1014 13:12:47.228621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.228989 master-1 kubenswrapper[4740]: I1014 13:12:47.228674 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-var-lock\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.228989 master-1 kubenswrapper[4740]: I1014 13:12:47.228737 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.250328 master-1 kubenswrapper[4740]: I1014 13:12:47.250214 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kube-api-access\") pod \"installer-4-master-1\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") " pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.364084 master-1 kubenswrapper[4740]: I1014 13:12:47.363879 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:47.483351 master-1 kubenswrapper[4740]: I1014 13:12:47.482790 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-595d5f74d8-hck8v"]
Oct 14 13:12:47.485208 master-1 kubenswrapper[4740]: I1014 13:12:47.485176 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:12:47.490810 master-1 kubenswrapper[4740]: I1014 13:12:47.490433 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 14 13:12:47.490810 master-1 kubenswrapper[4740]: I1014 13:12:47.490641 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 14 13:12:47.491008 master-1 kubenswrapper[4740]: I1014 13:12:47.490967 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 14 13:12:47.492122 master-1 kubenswrapper[4740]: I1014 13:12:47.491562 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 14 13:12:47.492122 master-1 kubenswrapper[4740]: I1014 13:12:47.491923 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 14 13:12:47.492271 master-1 kubenswrapper[4740]: I1014 13:12:47.492257 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 14 13:12:47.492518 master-1 kubenswrapper[4740]: I1014 13:12:47.492478 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 14 13:12:47.492774 master-1 kubenswrapper[4740]: I1014 13:12:47.492740 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 14 13:12:47.493302 master-1 kubenswrapper[4740]: I1014 13:12:47.493222 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-95k8q"
Oct 14 13:12:47.493550 master-1 kubenswrapper[4740]: I1014 13:12:47.493506 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 14 13:12:47.502786 master-1 kubenswrapper[4740]: I1014 13:12:47.501465 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-595d5f74d8-hck8v"]
Oct 14 13:12:47.510292 master-1 kubenswrapper[4740]: I1014 13:12:47.510222 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 14 13:12:47.634430 master-1 kubenswrapper[4740]: I1014 13:12:47.634272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-serving-ca\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:12:47.634430 master-1 kubenswrapper[4740]: I1014 13:12:47.634339 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-audit-dir\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:12:47.634430 master-1 kubenswrapper[4740]: I1014 13:12:47.634377 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-config\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\")
" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634430 master-1 kubenswrapper[4740]: I1014 13:12:47.634430 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgp4t\" (UniqueName: \"kubernetes.io/projected/a0a34636-f938-4d5d-952c-68b1433d01cc-kube-api-access-tgp4t\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634763 master-1 kubenswrapper[4740]: I1014 13:12:47.634451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-trusted-ca-bundle\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634763 master-1 kubenswrapper[4740]: I1014 13:12:47.634485 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-audit\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634763 master-1 kubenswrapper[4740]: I1014 13:12:47.634510 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-node-pullsecrets\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634763 master-1 kubenswrapper[4740]: I1014 13:12:47.634548 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-encryption-config\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634763 master-1 kubenswrapper[4740]: I1014 13:12:47.634572 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-client\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634763 master-1 kubenswrapper[4740]: I1014 13:12:47.634626 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-serving-cert\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.634763 master-1 kubenswrapper[4740]: I1014 13:12:47.634659 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-image-import-ca\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736367 master-1 kubenswrapper[4740]: I1014 13:12:47.736285 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-audit\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736367 master-1 kubenswrapper[4740]: I1014 13:12:47.736354 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-node-pullsecrets\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736401 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-encryption-config\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736430 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-client\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736494 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-serving-cert\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736528 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-image-import-ca\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736532 
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-node-pullsecrets\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-serving-ca\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-audit-dir\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736615 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-config\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 13:12:47.736641 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-trusted-ca-bundle\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.736687 master-1 kubenswrapper[4740]: I1014 
13:12:47.736670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgp4t\" (UniqueName: \"kubernetes.io/projected/a0a34636-f938-4d5d-952c-68b1433d01cc-kube-api-access-tgp4t\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.737421 master-1 kubenswrapper[4740]: I1014 13:12:47.737334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-audit-dir\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.737659 master-1 kubenswrapper[4740]: I1014 13:12:47.737613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-image-import-ca\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.737915 master-1 kubenswrapper[4740]: I1014 13:12:47.737866 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-audit\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.738554 master-1 kubenswrapper[4740]: I1014 13:12:47.738498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-serving-ca\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.739062 master-1 kubenswrapper[4740]: I1014 
13:12:47.739012 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-config\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.740818 master-1 kubenswrapper[4740]: I1014 13:12:47.740756 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-trusted-ca-bundle\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.751881 master-1 kubenswrapper[4740]: I1014 13:12:47.751826 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-encryption-config\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.751974 master-1 kubenswrapper[4740]: I1014 13:12:47.751893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-client\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.752187 master-1 kubenswrapper[4740]: I1014 13:12:47.752126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-serving-cert\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.757759 master-1 kubenswrapper[4740]: I1014 13:12:47.757701 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgp4t\" (UniqueName: \"kubernetes.io/projected/a0a34636-f938-4d5d-952c-68b1433d01cc-kube-api-access-tgp4t\") pod \"apiserver-595d5f74d8-hck8v\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.770535 master-1 kubenswrapper[4740]: I1014 13:12:47.770440 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:47.770535 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:47.770535 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:47.770535 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:47.770733 master-1 kubenswrapper[4740]: I1014 13:12:47.770602 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:47.806443 master-1 kubenswrapper[4740]: I1014 13:12:47.806355 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:12:47.849513 master-1 kubenswrapper[4740]: I1014 13:12:47.847841 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-4-master-1"] Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: I1014 13:12:48.029812 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:12:48.029906 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:12:48.030461 master-1 kubenswrapper[4740]: I1014 13:12:48.029922 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" 
containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:48.070770 master-1 kubenswrapper[4740]: I1014 13:12:48.070705 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-4-master-1" event={"ID":"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d","Type":"ContainerStarted","Data":"cd874dc8564e2888563dc3a484cbfba3af1f1f6dfb7ada9d8b6680f14bc7a81c"} Oct 14 13:12:48.269968 master-1 kubenswrapper[4740]: I1014 13:12:48.269900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-595d5f74d8-hck8v"] Oct 14 13:12:48.287042 master-1 kubenswrapper[4740]: W1014 13:12:48.286966 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0a34636_f938_4d5d_952c_68b1433d01cc.slice/crio-b18bff52d4d529f9e5b8390d13649b4b130d79b766b7f0cd81c86ad46f6aee87 WatchSource:0}: Error finding container b18bff52d4d529f9e5b8390d13649b4b130d79b766b7f0cd81c86ad46f6aee87: Status 404 returned error can't find the container with id b18bff52d4d529f9e5b8390d13649b4b130d79b766b7f0cd81c86ad46f6aee87 Oct 14 13:12:48.771492 master-1 kubenswrapper[4740]: I1014 13:12:48.771300 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:48.771492 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:48.771492 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:48.771492 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:48.771492 master-1 kubenswrapper[4740]: I1014 13:12:48.771407 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:49.087141 master-1 kubenswrapper[4740]: I1014 13:12:49.086948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-4-master-1" event={"ID":"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d","Type":"ContainerStarted","Data":"8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6"} Oct 14 13:12:49.089027 master-1 kubenswrapper[4740]: I1014 13:12:49.088963 4740 generic.go:334] "Generic (PLEG): container finished" podID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerID="e7b6632cec156bb361e2d5f2986265a8f548f804f2296c2d5dc4f2d8ae5613d7" exitCode=0 Oct 14 13:12:49.089027 master-1 kubenswrapper[4740]: I1014 13:12:49.089012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" event={"ID":"a0a34636-f938-4d5d-952c-68b1433d01cc","Type":"ContainerDied","Data":"e7b6632cec156bb361e2d5f2986265a8f548f804f2296c2d5dc4f2d8ae5613d7"} Oct 14 13:12:49.089027 master-1 kubenswrapper[4740]: I1014 13:12:49.089034 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" event={"ID":"a0a34636-f938-4d5d-952c-68b1433d01cc","Type":"ContainerStarted","Data":"b18bff52d4d529f9e5b8390d13649b4b130d79b766b7f0cd81c86ad46f6aee87"} Oct 14 13:12:49.112141 master-1 kubenswrapper[4740]: I1014 13:12:49.112047 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-4-master-1" podStartSLOduration=3.112024803 podStartE2EDuration="3.112024803s" podCreationTimestamp="2025-10-14 13:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:12:49.109356553 +0000 UTC m=+394.919645912" watchObservedRunningTime="2025-10-14 13:12:49.112024803 +0000 UTC m=+394.922314142" Oct 14 13:12:49.771157 master-1 kubenswrapper[4740]: I1014 13:12:49.771079 4740 
patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:49.771157 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:49.771157 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:49.771157 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:49.771453 master-1 kubenswrapper[4740]: I1014 13:12:49.771159 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:50.096845 master-1 kubenswrapper[4740]: I1014 13:12:50.096781 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" event={"ID":"a0a34636-f938-4d5d-952c-68b1433d01cc","Type":"ContainerStarted","Data":"1d3ba628773d880348e99b016c5d83127177dbbd2f44204a133e0dcdcec7087c"} Oct 14 13:12:50.097534 master-1 kubenswrapper[4740]: I1014 13:12:50.096856 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" event={"ID":"a0a34636-f938-4d5d-952c-68b1433d01cc","Type":"ContainerStarted","Data":"194c25a7f27d321abe7b43f432aa05c8f7acba7f239a24bf7b4072916b25b5f2"} Oct 14 13:12:50.119910 master-1 kubenswrapper[4740]: I1014 13:12:50.119816 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podStartSLOduration=3.119795502 podStartE2EDuration="3.119795502s" podCreationTimestamp="2025-10-14 13:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:12:50.115887669 +0000 UTC m=+395.926176998" 
watchObservedRunningTime="2025-10-14 13:12:50.119795502 +0000 UTC m=+395.930084831" Oct 14 13:12:50.771622 master-1 kubenswrapper[4740]: I1014 13:12:50.771501 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:50.771622 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:50.771622 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:50.771622 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:50.771622 master-1 kubenswrapper[4740]: I1014 13:12:50.771596 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:51.113276 master-1 kubenswrapper[4740]: I1014 13:12:51.113069 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/1.log" Oct 14 13:12:51.114690 master-1 kubenswrapper[4740]: I1014 13:12:51.114574 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/0.log" Oct 14 13:12:51.114877 master-1 kubenswrapper[4740]: I1014 13:12:51.114801 4740 generic.go:334] "Generic (PLEG): container finished" podID="398ba6fd-0f8f-46af-b690-61a6eec9176b" containerID="4642cf87216d34a41602fbb9cf593d0d329fd43c67ed7b264d9a3b2b3022daaf" exitCode=1 Oct 14 13:12:51.115002 master-1 kubenswrapper[4740]: I1014 13:12:51.114935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" 
event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerDied","Data":"4642cf87216d34a41602fbb9cf593d0d329fd43c67ed7b264d9a3b2b3022daaf"} Oct 14 13:12:51.115105 master-1 kubenswrapper[4740]: I1014 13:12:51.115088 4740 scope.go:117] "RemoveContainer" containerID="8c02147a25c6590fc2f39f47ab7a6cfafc0656844334bfba1f068b3fe5d01610" Oct 14 13:12:51.115970 master-1 kubenswrapper[4740]: I1014 13:12:51.115911 4740 scope.go:117] "RemoveContainer" containerID="4642cf87216d34a41602fbb9cf593d0d329fd43c67ed7b264d9a3b2b3022daaf" Oct 14 13:12:51.116436 master-1 kubenswrapper[4740]: E1014 13:12:51.116395 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-xhdjt_openshift-ingress-operator(398ba6fd-0f8f-46af-b690-61a6eec9176b)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" podUID="398ba6fd-0f8f-46af-b690-61a6eec9176b" Oct 14 13:12:51.771460 master-1 kubenswrapper[4740]: I1014 13:12:51.771355 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:12:51.771460 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:12:51.771460 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:12:51.771460 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:12:51.771820 master-1 kubenswrapper[4740]: I1014 13:12:51.771481 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:12:52.124756 master-1 kubenswrapper[4740]: I1014 13:12:52.124567 4740 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/1.log"
Oct 14 13:12:52.770934 master-1 kubenswrapper[4740]: I1014 13:12:52.770828 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:52.770934 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:52.770934 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:52.770934 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:52.770934 master-1 kubenswrapper[4740]: I1014 13:12:52.770922 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:52.807133 master-1 kubenswrapper[4740]: I1014 13:12:52.807059 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:12:52.807133 master-1 kubenswrapper[4740]: I1014 13:12:52.807143 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:12:52.819439 master-1 kubenswrapper[4740]: I1014 13:12:52.819118 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: I1014 13:12:53.032068 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:12:53.032363 master-1 kubenswrapper[4740]: I1014 13:12:53.032204 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:53.140732 master-1 kubenswrapper[4740]: I1014 13:12:53.140645 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:12:53.770742 master-1 kubenswrapper[4740]: I1014 13:12:53.770680 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:53.770742 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:53.770742 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:53.770742 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:53.771217 master-1 kubenswrapper[4740]: I1014 13:12:53.771183 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:54.611712 master-1 kubenswrapper[4740]: I1014 13:12:54.609821 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-4-master-1"]
Oct 14 13:12:54.611712 master-1 kubenswrapper[4740]: I1014 13:12:54.610079 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-4-master-1" podUID="658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" containerName="installer" containerID="cri-o://8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6" gracePeriod=30
Oct 14 13:12:54.771239 master-1 kubenswrapper[4740]: I1014 13:12:54.771175 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:54.771239 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:54.771239 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:54.771239 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:54.771481 master-1 kubenswrapper[4740]: I1014 13:12:54.771306 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:55.038103 master-1 kubenswrapper[4740]: I1014 13:12:55.037531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:12:55.109850 master-1 kubenswrapper[4740]: I1014 13:12:55.109725 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-4-master-1_658a2b8e-ec59-4f9a-86b5-c86483cc8e3d/installer/0.log"
Oct 14 13:12:55.110395 master-1 kubenswrapper[4740]: I1014 13:12:55.109939 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:55.155888 master-1 kubenswrapper[4740]: I1014 13:12:55.155846 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-4-master-1_658a2b8e-ec59-4f9a-86b5-c86483cc8e3d/installer/0.log"
Oct 14 13:12:55.156170 master-1 kubenswrapper[4740]: I1014 13:12:55.155898 4740 generic.go:334] "Generic (PLEG): container finished" podID="658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" containerID="8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6" exitCode=1
Oct 14 13:12:55.156170 master-1 kubenswrapper[4740]: I1014 13:12:55.155926 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-4-master-1" event={"ID":"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d","Type":"ContainerDied","Data":"8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6"}
Oct 14 13:12:55.156170 master-1 kubenswrapper[4740]: I1014 13:12:55.155955 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-4-master-1" event={"ID":"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d","Type":"ContainerDied","Data":"cd874dc8564e2888563dc3a484cbfba3af1f1f6dfb7ada9d8b6680f14bc7a81c"}
Oct 14 13:12:55.156170 master-1 kubenswrapper[4740]: I1014 13:12:55.155973 4740 scope.go:117] "RemoveContainer" containerID="8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6"
Oct 14 13:12:55.156170 master-1 kubenswrapper[4740]: I1014 13:12:55.156081 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-4-master-1"
Oct 14 13:12:55.170516 master-1 kubenswrapper[4740]: I1014 13:12:55.170471 4740 scope.go:117] "RemoveContainer" containerID="8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6"
Oct 14 13:12:55.171135 master-1 kubenswrapper[4740]: E1014 13:12:55.171079 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6\": container with ID starting with 8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6 not found: ID does not exist" containerID="8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6"
Oct 14 13:12:55.171226 master-1 kubenswrapper[4740]: I1014 13:12:55.171144 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6"} err="failed to get container status \"8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6\": rpc error: code = NotFound desc = could not find container \"8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6\": container with ID starting with 8cb941e5cdb04b01046af694ec31a8c06a675e3c693df5b490e16d5148055bf6 not found: ID does not exist"
Oct 14 13:12:55.256408 master-1 kubenswrapper[4740]: I1014 13:12:55.256290 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kubelet-dir\") pod \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") "
Oct 14 13:12:55.256408 master-1 kubenswrapper[4740]: I1014 13:12:55.256381 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kube-api-access\") pod \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") "
Oct 14 13:12:55.256408 master-1 kubenswrapper[4740]: I1014 13:12:55.256434 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-var-lock\") pod \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\" (UID: \"658a2b8e-ec59-4f9a-86b5-c86483cc8e3d\") "
Oct 14 13:12:55.256751 master-1 kubenswrapper[4740]: I1014 13:12:55.256444 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" (UID: "658a2b8e-ec59-4f9a-86b5-c86483cc8e3d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:12:55.256751 master-1 kubenswrapper[4740]: I1014 13:12:55.256649 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-var-lock" (OuterVolumeSpecName: "var-lock") pod "658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" (UID: "658a2b8e-ec59-4f9a-86b5-c86483cc8e3d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:12:55.256751 master-1 kubenswrapper[4740]: I1014 13:12:55.256706 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:55.259314 master-1 kubenswrapper[4740]: I1014 13:12:55.259258 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" (UID: "658a2b8e-ec59-4f9a-86b5-c86483cc8e3d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:12:55.358971 master-1 kubenswrapper[4740]: I1014 13:12:55.358743 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:55.358971 master-1 kubenswrapper[4740]: I1014 13:12:55.358822 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 14 13:12:55.492212 master-1 kubenswrapper[4740]: I1014 13:12:55.492136 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-4-master-1"]
Oct 14 13:12:55.495723 master-1 kubenswrapper[4740]: I1014 13:12:55.495673 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-4-master-1"]
Oct 14 13:12:55.770187 master-1 kubenswrapper[4740]: I1014 13:12:55.770110 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:55.770187 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:55.770187 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:55.770187 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:55.770868 master-1 kubenswrapper[4740]: I1014 13:12:55.770205 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:56.771420 master-1 kubenswrapper[4740]: I1014 13:12:56.771343 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:56.771420 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:56.771420 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:56.771420 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:56.772093 master-1 kubenswrapper[4740]: I1014 13:12:56.771443 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:56.956143 master-1 kubenswrapper[4740]: I1014 13:12:56.956069 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" path="/var/lib/kubelet/pods/658a2b8e-ec59-4f9a-86b5-c86483cc8e3d/volumes"
Oct 14 13:12:57.771420 master-1 kubenswrapper[4740]: I1014 13:12:57.771345 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:57.771420 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:57.771420 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:57.771420 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:57.772567 master-1 kubenswrapper[4740]: I1014 13:12:57.772441 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:57.807999 master-1 kubenswrapper[4740]: I1014 13:12:57.807927 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-5-master-1"]
Oct 14 13:12:57.809199 master-1 kubenswrapper[4740]: E1014 13:12:57.809112 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" containerName="installer"
Oct 14 13:12:57.809199 master-1 kubenswrapper[4740]: I1014 13:12:57.809142 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" containerName="installer"
Oct 14 13:12:57.809463 master-1 kubenswrapper[4740]: I1014 13:12:57.809301 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="658a2b8e-ec59-4f9a-86b5-c86483cc8e3d" containerName="installer"
Oct 14 13:12:57.809992 master-1 kubenswrapper[4740]: I1014 13:12:57.809918 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.828068 master-1 kubenswrapper[4740]: I1014 13:12:57.828015 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-1"]
Oct 14 13:12:57.894080 master-1 kubenswrapper[4740]: I1014 13:12:57.893984 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-var-lock\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.894080 master-1 kubenswrapper[4740]: I1014 13:12:57.894059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6927f794-9b47-4a35-b412-78b7d24f7622-kube-api-access\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.894391 master-1 kubenswrapper[4740]: I1014 13:12:57.894301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.995857 master-1 kubenswrapper[4740]: I1014 13:12:57.995685 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-var-lock\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.995857 master-1 kubenswrapper[4740]: I1014 13:12:57.995799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6927f794-9b47-4a35-b412-78b7d24f7622-kube-api-access\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.996457 master-1 kubenswrapper[4740]: I1014 13:12:57.995894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.996457 master-1 kubenswrapper[4740]: I1014 13:12:57.995898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-var-lock\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:57.996457 master-1 kubenswrapper[4740]: I1014 13:12:57.996035 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:58.022092 master-1 kubenswrapper[4740]: I1014 13:12:58.021749 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6927f794-9b47-4a35-b412-78b7d24f7622-kube-api-access\") pod \"installer-5-master-1\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: I1014 13:12:58.034623 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:12:58.035315 master-1 kubenswrapper[4740]: I1014 13:12:58.035164 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:58.142488 master-1 kubenswrapper[4740]: I1014 13:12:58.142387 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-1"
Oct 14 13:12:58.683335 master-1 kubenswrapper[4740]: I1014 13:12:58.683252 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-5-master-1"]
Oct 14 13:12:58.686555 master-1 kubenswrapper[4740]: W1014 13:12:58.686475 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6927f794_9b47_4a35_b412_78b7d24f7622.slice/crio-b5038ca3ac631f1730a63909f85795ba3c9a6f687bb25a0eb0d359b36f9a7853 WatchSource:0}: Error finding container b5038ca3ac631f1730a63909f85795ba3c9a6f687bb25a0eb0d359b36f9a7853: Status 404 returned error can't find the container with id b5038ca3ac631f1730a63909f85795ba3c9a6f687bb25a0eb0d359b36f9a7853
Oct 14 13:12:58.787692 master-1 kubenswrapper[4740]: I1014 13:12:58.782458 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:58.787692 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:58.787692 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:58.787692 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:58.787692 master-1 kubenswrapper[4740]: I1014 13:12:58.782537 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:12:59.187139 master-1 kubenswrapper[4740]: I1014 13:12:59.187012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"6927f794-9b47-4a35-b412-78b7d24f7622","Type":"ContainerStarted","Data":"b5038ca3ac631f1730a63909f85795ba3c9a6f687bb25a0eb0d359b36f9a7853"}
Oct 14 13:12:59.771265 master-1 kubenswrapper[4740]: I1014 13:12:59.771163 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:12:59.771265 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:12:59.771265 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:12:59.771265 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:12:59.771764 master-1 kubenswrapper[4740]: I1014 13:12:59.771284 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:00.197500 master-1 kubenswrapper[4740]: I1014 13:13:00.197120 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"6927f794-9b47-4a35-b412-78b7d24f7622","Type":"ContainerStarted","Data":"2b8339850f796f4cefb3b4fee56f3c30a156abd91eaf2c144f467486b31d4bff"}
Oct 14 13:13:00.222500 master-1 kubenswrapper[4740]: I1014 13:13:00.222384 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-5-master-1" podStartSLOduration=3.222351835 podStartE2EDuration="3.222351835s" podCreationTimestamp="2025-10-14 13:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:13:00.216139477 +0000 UTC m=+406.026428846" watchObservedRunningTime="2025-10-14 13:13:00.222351835 +0000 UTC m=+406.032641214"
Oct 14 13:13:00.771487 master-1 kubenswrapper[4740]: I1014 13:13:00.771411 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:00.771487 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:00.771487 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:00.771487 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:00.771925 master-1 kubenswrapper[4740]: I1014 13:13:00.771506 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:01.771893 master-1 kubenswrapper[4740]: I1014 13:13:01.771770 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:01.771893 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:01.771893 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:01.771893 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:01.771893 master-1 kubenswrapper[4740]: I1014 13:13:01.771882 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:01.944483 master-1 kubenswrapper[4740]: I1014 13:13:01.944402 4740 scope.go:117] "RemoveContainer" containerID="4642cf87216d34a41602fbb9cf593d0d329fd43c67ed7b264d9a3b2b3022daaf"
Oct 14 13:13:02.216845 master-1 kubenswrapper[4740]: I1014 13:13:02.216772 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/1.log"
Oct 14 13:13:02.217461 master-1 kubenswrapper[4740]: I1014 13:13:02.217394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerStarted","Data":"03ca19c1b466ba0fcc071d9bfb4a5ed1c705eab7bdb06858b96afeb5d268130b"}
Oct 14 13:13:02.770537 master-1 kubenswrapper[4740]: I1014 13:13:02.770459 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:02.770537 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:02.770537 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:02.770537 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:02.770966 master-1 kubenswrapper[4740]: I1014 13:13:02.770543 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: I1014 13:13:03.036215 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:13:03.036369 master-1 kubenswrapper[4740]: I1014 13:13:03.036356 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:03.770882 master-1 kubenswrapper[4740]: I1014 13:13:03.770767 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:03.770882 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:03.770882 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:03.770882 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:03.770882 master-1 kubenswrapper[4740]: I1014 13:13:03.770859 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:04.772057 master-1 kubenswrapper[4740]: I1014 13:13:04.771956 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:04.772057 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:04.772057 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:04.772057 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:04.773487 master-1 kubenswrapper[4740]: I1014 13:13:04.772118 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:05.772111 master-1 kubenswrapper[4740]: I1014 13:13:05.771924 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:05.772111 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:05.772111 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:05.772111 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:05.774726 master-1 kubenswrapper[4740]: I1014 13:13:05.772125 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:06.248778 master-1 kubenswrapper[4740]: I1014 13:13:06.248703 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-3-master-1_10c5ffc3-1817-4a99-9e46-a205827a136d/installer/0.log"
Oct 14 13:13:06.248778 master-1 kubenswrapper[4740]: I1014 13:13:06.248774 4740 generic.go:334] "Generic (PLEG): container finished" podID="10c5ffc3-1817-4a99-9e46-a205827a136d" containerID="663fc829394d2f5a3ee391939cefd98acd80028126df6598ef23664bfcff9269" exitCode=1
Oct 14 13:13:06.249160 master-1 kubenswrapper[4740]: I1014 13:13:06.248815 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-1" event={"ID":"10c5ffc3-1817-4a99-9e46-a205827a136d","Type":"ContainerDied","Data":"663fc829394d2f5a3ee391939cefd98acd80028126df6598ef23664bfcff9269"}
Oct 14 13:13:06.384562 master-1 kubenswrapper[4740]: I1014 13:13:06.384498 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-3-master-1_10c5ffc3-1817-4a99-9e46-a205827a136d/installer/0.log"
Oct 14 13:13:06.384737 master-1 kubenswrapper[4740]: I1014 13:13:06.384613 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-3-master-1"
Oct 14 13:13:06.419859 master-1 kubenswrapper[4740]: I1014 13:13:06.419738 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c5ffc3-1817-4a99-9e46-a205827a136d-kube-api-access\") pod \"10c5ffc3-1817-4a99-9e46-a205827a136d\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") "
Oct 14 13:13:06.419859 master-1 kubenswrapper[4740]: I1014 13:13:06.419826 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-var-lock\") pod \"10c5ffc3-1817-4a99-9e46-a205827a136d\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") "
Oct 14 13:13:06.420401 master-1 kubenswrapper[4740]: I1014 13:13:06.419901 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-kubelet-dir\") pod \"10c5ffc3-1817-4a99-9e46-a205827a136d\" (UID: \"10c5ffc3-1817-4a99-9e46-a205827a136d\") "
Oct 14 13:13:06.420401 master-1 kubenswrapper[4740]: I1014 13:13:06.420348 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "10c5ffc3-1817-4a99-9e46-a205827a136d" (UID: "10c5ffc3-1817-4a99-9e46-a205827a136d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:13:06.422550 master-1 kubenswrapper[4740]: I1014 13:13:06.422480 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-var-lock" (OuterVolumeSpecName: "var-lock") pod "10c5ffc3-1817-4a99-9e46-a205827a136d" (UID: "10c5ffc3-1817-4a99-9e46-a205827a136d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:13:06.425539 master-1 kubenswrapper[4740]: I1014 13:13:06.425469 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c5ffc3-1817-4a99-9e46-a205827a136d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "10c5ffc3-1817-4a99-9e46-a205827a136d" (UID: "10c5ffc3-1817-4a99-9e46-a205827a136d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:13:06.522130 master-1 kubenswrapper[4740]: I1014 13:13:06.522062 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10c5ffc3-1817-4a99-9e46-a205827a136d-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:06.522130 master-1 kubenswrapper[4740]: I1014 13:13:06.522104 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:06.522130 master-1 kubenswrapper[4740]: I1014 13:13:06.522118 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10c5ffc3-1817-4a99-9e46-a205827a136d-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:06.771376 master-1 kubenswrapper[4740]: I1014 13:13:06.771301 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:06.771376 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:06.771376 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:06.771376 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:06.771867 master-1 kubenswrapper[4740]: I1014 13:13:06.771386 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:07.259220 master-1 kubenswrapper[4740]: I1014 13:13:07.259143 4740 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-etcd_installer-3-master-1_10c5ffc3-1817-4a99-9e46-a205827a136d/installer/0.log" Oct 14 13:13:07.260159 master-1 kubenswrapper[4740]: I1014 13:13:07.259286 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-3-master-1" event={"ID":"10c5ffc3-1817-4a99-9e46-a205827a136d","Type":"ContainerDied","Data":"e689e3649ba4e5dc0b6cf33d2f4be69aa218ee8b24a424ec681ac5cb02e9557e"} Oct 14 13:13:07.260159 master-1 kubenswrapper[4740]: I1014 13:13:07.259351 4740 scope.go:117] "RemoveContainer" containerID="663fc829394d2f5a3ee391939cefd98acd80028126df6598ef23664bfcff9269" Oct 14 13:13:07.260159 master-1 kubenswrapper[4740]: I1014 13:13:07.259393 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-3-master-1" Oct 14 13:13:07.302328 master-1 kubenswrapper[4740]: I1014 13:13:07.302258 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-3-master-1"] Oct 14 13:13:07.309939 master-1 kubenswrapper[4740]: I1014 13:13:07.309850 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-3-master-1"] Oct 14 13:13:07.771534 master-1 kubenswrapper[4740]: I1014 13:13:07.771460 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:07.771534 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:07.771534 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:07.771534 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:07.771973 master-1 kubenswrapper[4740]: I1014 13:13:07.771567 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: I1014 13:13:08.031203 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:13:08.031428 master-1 kubenswrapper[4740]: I1014 13:13:08.031349 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:08.771322 master-1 kubenswrapper[4740]: I1014 13:13:08.771259 4740 
patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:08.771322 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:08.771322 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:08.771322 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:08.771886 master-1 kubenswrapper[4740]: I1014 13:13:08.771340 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:08.960154 master-1 kubenswrapper[4740]: I1014 13:13:08.960034 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c5ffc3-1817-4a99-9e46-a205827a136d" path="/var/lib/kubelet/pods/10c5ffc3-1817-4a99-9e46-a205827a136d/volumes" Oct 14 13:13:09.772068 master-1 kubenswrapper[4740]: I1014 13:13:09.771967 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:09.772068 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:09.772068 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:09.772068 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:09.772068 master-1 kubenswrapper[4740]: I1014 13:13:09.772061 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:10.076046 master-1 
kubenswrapper[4740]: I1014 13:13:10.075848 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-mzrkb_ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67/assisted-installer-controller/0.log" Oct 14 13:13:10.104184 master-1 kubenswrapper[4740]: I1014 13:13:10.104101 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:13:10.771927 master-1 kubenswrapper[4740]: I1014 13:13:10.771810 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:10.771927 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:10.771927 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:10.771927 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:10.773031 master-1 kubenswrapper[4740]: I1014 13:13:10.771945 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:11.772175 master-1 kubenswrapper[4740]: I1014 13:13:11.772080 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:11.772175 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:11.772175 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:11.772175 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:11.773256 master-1 
kubenswrapper[4740]: I1014 13:13:11.772178 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:12.770856 master-1 kubenswrapper[4740]: I1014 13:13:12.770727 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:12.770856 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:12.770856 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:12.770856 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:12.770856 master-1 kubenswrapper[4740]: I1014 13:13:12.770798 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: I1014 13:13:13.033206 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers 
ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:13:13.033493 master-1 kubenswrapper[4740]: I1014 13:13:13.033350 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:13.771441 master-1 kubenswrapper[4740]: I1014 13:13:13.771346 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:13.771441 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:13.771441 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:13.771441 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:13.771878 master-1 kubenswrapper[4740]: I1014 13:13:13.771449 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:14.771277 master-1 kubenswrapper[4740]: I1014 13:13:14.771152 4740 
patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:14.771277 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:14.771277 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:14.771277 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:14.772405 master-1 kubenswrapper[4740]: I1014 13:13:14.771306 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:15.770897 master-1 kubenswrapper[4740]: I1014 13:13:15.770754 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:15.770897 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:15.770897 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:15.770897 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:15.770897 master-1 kubenswrapper[4740]: I1014 13:13:15.770881 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:16.772113 master-1 kubenswrapper[4740]: I1014 13:13:16.772019 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Oct 14 13:13:16.772113 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:16.772113 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:16.772113 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:16.773142 master-1 kubenswrapper[4740]: I1014 13:13:16.772122 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:16.985443 master-1 kubenswrapper[4740]: I1014 13:13:16.985326 4740 scope.go:117] "RemoveContainer" containerID="3766442c27bb97fdb3172d5d35ef57eed36dc9e7696554f7a70c82794900b102" Oct 14 13:13:17.771408 master-1 kubenswrapper[4740]: I1014 13:13:17.771319 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:17.771408 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:17.771408 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:17.771408 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:17.771860 master-1 kubenswrapper[4740]: I1014 13:13:17.771422 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: I1014 13:13:18.034131 4740 patch_prober.go:28] interesting pod/apiserver-96c4c446c-brl6n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:13:18.034359 
master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:13:18.034359 master-1 kubenswrapper[4740]: I1014 13:13:18.034224 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:18.771319 master-1 kubenswrapper[4740]: I1014 13:13:18.771205 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:18.771319 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:18.771319 master-1 kubenswrapper[4740]: [+]process-running ok 
Oct 14 13:13:18.771319 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:18.771319 master-1 kubenswrapper[4740]: I1014 13:13:18.771315 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:19.353497 master-1 kubenswrapper[4740]: I1014 13:13:19.353425 4740 generic.go:334] "Generic (PLEG): container finished" podID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerID="c5cd1b05a00ba84888e4a60b94053728d4fbb75e95c5e2e3f17dac5202720621" exitCode=0 Oct 14 13:13:19.353497 master-1 kubenswrapper[4740]: I1014 13:13:19.353477 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" event={"ID":"ebfb9d2f-6716-4abe-b781-0d9632f00498","Type":"ContainerDied","Data":"c5cd1b05a00ba84888e4a60b94053728d4fbb75e95c5e2e3f17dac5202720621"} Oct 14 13:13:19.771917 master-1 kubenswrapper[4740]: I1014 13:13:19.771833 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:19.771917 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:19.771917 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:19.771917 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:19.772272 master-1 kubenswrapper[4740]: I1014 13:13:19.771943 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:19.810309 master-1 kubenswrapper[4740]: I1014 13:13:19.810199 4740 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" Oct 14 13:13:19.862018 master-1 kubenswrapper[4740]: I1014 13:13:19.861932 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-s9576"] Oct 14 13:13:19.862342 master-1 kubenswrapper[4740]: E1014 13:13:19.862300 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="fix-audit-permissions" Oct 14 13:13:19.862342 master-1 kubenswrapper[4740]: I1014 13:13:19.862332 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="fix-audit-permissions" Oct 14 13:13:19.862480 master-1 kubenswrapper[4740]: E1014 13:13:19.862355 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c5ffc3-1817-4a99-9e46-a205827a136d" containerName="installer" Oct 14 13:13:19.862480 master-1 kubenswrapper[4740]: I1014 13:13:19.862369 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c5ffc3-1817-4a99-9e46-a205827a136d" containerName="installer" Oct 14 13:13:19.862480 master-1 kubenswrapper[4740]: E1014 13:13:19.862399 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" Oct 14 13:13:19.862480 master-1 kubenswrapper[4740]: I1014 13:13:19.862413 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" Oct 14 13:13:19.862707 master-1 kubenswrapper[4740]: I1014 13:13:19.862574 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c5ffc3-1817-4a99-9e46-a205827a136d" containerName="installer" Oct 14 13:13:19.862707 master-1 kubenswrapper[4740]: I1014 13:13:19.862610 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" containerName="oauth-apiserver" Oct 14 
13:13:19.863852 master-1 kubenswrapper[4740]: I1014 13:13:19.863795 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" Oct 14 13:13:19.868847 master-1 kubenswrapper[4740]: I1014 13:13:19.868779 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-8gpjk" Oct 14 13:13:19.883345 master-1 kubenswrapper[4740]: I1014 13:13:19.882844 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-s9576"] Oct 14 13:13:19.932031 master-1 kubenswrapper[4740]: I1014 13:13:19.931941 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-trusted-ca-bundle\") pod \"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.932031 master-1 kubenswrapper[4740]: I1014 13:13:19.932017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-serving-ca\") pod \"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.932366 master-1 kubenswrapper[4740]: I1014 13:13:19.932048 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn88j\" (UniqueName: \"kubernetes.io/projected/ebfb9d2f-6716-4abe-b781-0d9632f00498-kube-api-access-sn88j\") pod \"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.932366 master-1 kubenswrapper[4740]: I1014 13:13:19.932097 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-client\") pod 
\"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.932366 master-1 kubenswrapper[4740]: I1014 13:13:19.932149 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-encryption-config\") pod \"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.932366 master-1 kubenswrapper[4740]: I1014 13:13:19.932193 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-dir\") pod \"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.932366 master-1 kubenswrapper[4740]: I1014 13:13:19.932252 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-policies\") pod \"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.932366 master-1 kubenswrapper[4740]: I1014 13:13:19.932318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-serving-cert\") pod \"ebfb9d2f-6716-4abe-b781-0d9632f00498\" (UID: \"ebfb9d2f-6716-4abe-b781-0d9632f00498\") " Oct 14 13:13:19.933069 master-1 kubenswrapper[4740]: I1014 13:13:19.933029 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:13:19.933838 master-1 kubenswrapper[4740]: I1014 13:13:19.933687 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:13:19.933928 master-1 kubenswrapper[4740]: I1014 13:13:19.933826 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:13:19.934201 master-1 kubenswrapper[4740]: I1014 13:13:19.934126 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:13:19.937641 master-1 kubenswrapper[4740]: I1014 13:13:19.937585 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:13:19.938312 master-1 kubenswrapper[4740]: I1014 13:13:19.938259 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebfb9d2f-6716-4abe-b781-0d9632f00498-kube-api-access-sn88j" (OuterVolumeSpecName: "kube-api-access-sn88j") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "kube-api-access-sn88j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:13:19.938312 master-1 kubenswrapper[4740]: I1014 13:13:19.938295 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:13:19.938451 master-1 kubenswrapper[4740]: I1014 13:13:19.938396 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ebfb9d2f-6716-4abe-b781-0d9632f00498" (UID: "ebfb9d2f-6716-4abe-b781-0d9632f00498"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:13:20.034054 master-1 kubenswrapper[4740]: I1014 13:13:20.033861 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-client\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.034054 master-1 kubenswrapper[4740]: I1014 13:13:20.033947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-trusted-ca-bundle\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.034054 master-1 kubenswrapper[4740]: I1014 13:13:20.033992 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-serving-cert\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.034054 master-1 kubenswrapper[4740]: I1014 13:13:20.034035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-dir\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.034592 master-1 kubenswrapper[4740]: I1014 13:13:20.034074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-encryption-config\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.034592 master-1 kubenswrapper[4740]: I1014 13:13:20.034449 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-policies\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.034717 master-1 kubenswrapper[4740]: I1014 13:13:20.034614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px7x9\" (UniqueName: \"kubernetes.io/projected/6492175e-e529-4b83-a4f0-45c7a30f7a86-kube-api-access-px7x9\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.035259 master-1 kubenswrapper[4740]: I1014 13:13:20.035179 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-serving-ca\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.035782 master-1 kubenswrapper[4740]: I1014 13:13:20.035708 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.035956 master-1 kubenswrapper[4740]: I1014 13:13:20.035916 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn88j\" (UniqueName: \"kubernetes.io/projected/ebfb9d2f-6716-4abe-b781-0d9632f00498-kube-api-access-sn88j\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.035956 master-1 kubenswrapper[4740]: I1014 13:13:20.035949 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-serving-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.036100 master-1 kubenswrapper[4740]: I1014 13:13:20.035972 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.036100 master-1 kubenswrapper[4740]: I1014 13:13:20.035992 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.036100 master-1 kubenswrapper[4740]: I1014 13:13:20.036013 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.036100 master-1 kubenswrapper[4740]: I1014 13:13:20.036032 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ebfb9d2f-6716-4abe-b781-0d9632f00498-audit-policies\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.036100 master-1 kubenswrapper[4740]: I1014 13:13:20.036052 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb9d2f-6716-4abe-b781-0d9632f00498-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:13:20.138181 master-1 kubenswrapper[4740]: I1014 13:13:20.138076 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-policies\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.138500 master-1 kubenswrapper[4740]: I1014 13:13:20.138330 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px7x9\" (UniqueName: \"kubernetes.io/projected/6492175e-e529-4b83-a4f0-45c7a30f7a86-kube-api-access-px7x9\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.138500 master-1 kubenswrapper[4740]: I1014 13:13:20.138467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-serving-ca\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.138744 master-1 kubenswrapper[4740]: I1014 13:13:20.138684 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-client\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.138834 master-1 kubenswrapper[4740]: I1014 13:13:20.138768 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-trusted-ca-bundle\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.138918 master-1 kubenswrapper[4740]: I1014 13:13:20.138825 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-serving-cert\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.138918 master-1 kubenswrapper[4740]: I1014 13:13:20.138883 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-dir\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.139051 master-1 kubenswrapper[4740]: I1014 13:13:20.138944 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-encryption-config\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.139925 master-1 kubenswrapper[4740]: I1014 13:13:20.139377 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-policies\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.139925 master-1 kubenswrapper[4740]: I1014 13:13:20.139527 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-dir\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.140191 master-1 kubenswrapper[4740]: I1014 13:13:20.140117 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-trusted-ca-bundle\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.140307 master-1 kubenswrapper[4740]: I1014 13:13:20.140272 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-serving-ca\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.143791 master-1 kubenswrapper[4740]: I1014 13:13:20.143736 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-serving-cert\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.144275 master-1 kubenswrapper[4740]: I1014 13:13:20.144210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-client\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.144796 master-1 kubenswrapper[4740]: I1014 13:13:20.144748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-encryption-config\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.173388 master-1 kubenswrapper[4740]: I1014 13:13:20.172310 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px7x9\" (UniqueName: \"kubernetes.io/projected/6492175e-e529-4b83-a4f0-45c7a30f7a86-kube-api-access-px7x9\") pod \"apiserver-7b6784d654-s9576\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.205362 master-1 kubenswrapper[4740]: I1014 13:13:20.205289 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:20.371570 master-1 kubenswrapper[4740]: I1014 13:13:20.371499 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n" event={"ID":"ebfb9d2f-6716-4abe-b781-0d9632f00498","Type":"ContainerDied","Data":"eee0ee6b25d6d7e91442bd6108b3db1c9b1e388a31d368ca7c194a15ba4cdb5f"}
Oct 14 13:13:20.372103 master-1 kubenswrapper[4740]: I1014 13:13:20.371582 4740 scope.go:117] "RemoveContainer" containerID="c5cd1b05a00ba84888e4a60b94053728d4fbb75e95c5e2e3f17dac5202720621"
Oct 14 13:13:20.372103 master-1 kubenswrapper[4740]: I1014 13:13:20.371604 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-96c4c446c-brl6n"
Oct 14 13:13:20.418417 master-1 kubenswrapper[4740]: I1014 13:13:20.418365 4740 scope.go:117] "RemoveContainer" containerID="68ebc7959133a6009d0461f663d3d8332f3db7cc21e6013363b08f4d56e8d065"
Oct 14 13:13:20.433749 master-1 kubenswrapper[4740]: I1014 13:13:20.432931 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-96c4c446c-brl6n"]
Oct 14 13:13:20.440589 master-1 kubenswrapper[4740]: I1014 13:13:20.440529 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-96c4c446c-brl6n"]
Oct 14 13:13:20.704334 master-1 kubenswrapper[4740]: I1014 13:13:20.704262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-s9576"]
Oct 14 13:13:20.709086 master-1 kubenswrapper[4740]: W1014 13:13:20.708991 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6492175e_e529_4b83_a4f0_45c7a30f7a86.slice/crio-116681c06662a5af31c4acc21e9356b554a14ae7ef5a59262361b356e94a29dc WatchSource:0}: Error finding container 116681c06662a5af31c4acc21e9356b554a14ae7ef5a59262361b356e94a29dc: Status 404 returned error can't find the container with id 116681c06662a5af31c4acc21e9356b554a14ae7ef5a59262361b356e94a29dc
Oct 14 13:13:20.771066 master-1 kubenswrapper[4740]: I1014 13:13:20.770976 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:20.771066 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:20.771066 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:20.771066 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:20.771414 master-1 kubenswrapper[4740]: I1014 13:13:20.771090 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:20.960320 master-1 kubenswrapper[4740]: I1014 13:13:20.959799 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebfb9d2f-6716-4abe-b781-0d9632f00498" path="/var/lib/kubelet/pods/ebfb9d2f-6716-4abe-b781-0d9632f00498/volumes"
Oct 14 13:13:21.386793 master-1 kubenswrapper[4740]: I1014 13:13:21.386691 4740 generic.go:334] "Generic (PLEG): container finished" podID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerID="46f87f50cd13f9281fb5bdb324b3969bf2687cbf6d1e1e8e755a253c6f2d276c" exitCode=0
Oct 14 13:13:21.386793 master-1 kubenswrapper[4740]: I1014 13:13:21.386761 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" event={"ID":"6492175e-e529-4b83-a4f0-45c7a30f7a86","Type":"ContainerDied","Data":"46f87f50cd13f9281fb5bdb324b3969bf2687cbf6d1e1e8e755a253c6f2d276c"}
Oct 14 13:13:21.387829 master-1 kubenswrapper[4740]: I1014 13:13:21.386811 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" event={"ID":"6492175e-e529-4b83-a4f0-45c7a30f7a86","Type":"ContainerStarted","Data":"116681c06662a5af31c4acc21e9356b554a14ae7ef5a59262361b356e94a29dc"}
Oct 14 13:13:21.772379 master-1 kubenswrapper[4740]: I1014 13:13:21.772268 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:21.772379 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:21.772379 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:21.772379 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:21.772730 master-1 kubenswrapper[4740]: I1014 13:13:21.772381 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:22.399452 master-1 kubenswrapper[4740]: I1014 13:13:22.399359 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" event={"ID":"6492175e-e529-4b83-a4f0-45c7a30f7a86","Type":"ContainerStarted","Data":"a239b7f63812583aa918ecca92d78715042d5630c3b5d976852ccf0f81559882"}
Oct 14 13:13:22.425340 master-1 kubenswrapper[4740]: I1014 13:13:22.425122 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podStartSLOduration=54.425087787 podStartE2EDuration="54.425087787s" podCreationTimestamp="2025-10-14 13:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:13:22.424481783 +0000 UTC m=+428.234771182" watchObservedRunningTime="2025-10-14 13:13:22.425087787 +0000 UTC m=+428.235377156"
Oct 14 13:13:22.770437 master-1 kubenswrapper[4740]: I1014 13:13:22.770363 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:22.770437 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:22.770437 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:22.770437 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:22.770704 master-1 kubenswrapper[4740]: I1014 13:13:22.770452 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:23.772628 master-1 kubenswrapper[4740]: I1014 13:13:23.772559 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:23.772628 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:23.772628 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:23.772628 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:23.773356 master-1 kubenswrapper[4740]: I1014 13:13:23.772649 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:24.772257 master-1 kubenswrapper[4740]: I1014 13:13:24.772135 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:24.772257 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:24.772257 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:24.772257 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:24.773445 master-1 kubenswrapper[4740]: I1014 13:13:24.772296 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:25.206718 master-1 kubenswrapper[4740]: I1014 13:13:25.206504 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:25.206718 master-1 kubenswrapper[4740]: I1014 13:13:25.206577 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:25.216624 master-1 kubenswrapper[4740]: I1014 13:13:25.216571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:25.427463 master-1 kubenswrapper[4740]: I1014 13:13:25.427402 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:13:25.772086 master-1 kubenswrapper[4740]: I1014 13:13:25.772017 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:25.772086 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:25.772086 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:25.772086 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:25.772672 master-1 kubenswrapper[4740]: I1014 13:13:25.772627 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:26.772060 master-1 kubenswrapper[4740]: I1014 13:13:26.771998 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:26.772060 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:26.772060 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:26.772060 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:26.773184 master-1 kubenswrapper[4740]: I1014 13:13:26.772076 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:27.771101 master-1 kubenswrapper[4740]: I1014 13:13:27.771031 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:27.771101 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:27.771101 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:27.771101 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:27.771480 master-1 kubenswrapper[4740]: I1014 13:13:27.771114 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:28.771502 master-1 kubenswrapper[4740]: I1014 13:13:28.771409 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:28.771502 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:28.771502 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:28.771502 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:28.772689 master-1 kubenswrapper[4740]: I1014 13:13:28.771523 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:29.771740 master-1 kubenswrapper[4740]: I1014 13:13:29.771630 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:29.771740 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:29.771740 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:29.771740 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:29.772714 master-1 kubenswrapper[4740]: I1014 13:13:29.771739 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:30.771318 master-1 kubenswrapper[4740]: I1014 13:13:30.771269 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:30.771318 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:30.771318 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:30.771318 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:30.771796 master-1 kubenswrapper[4740]: I1014 13:13:30.771765 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:31.771558 master-1 kubenswrapper[4740]: I1014 13:13:31.771480 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:31.771558 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:31.771558 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:31.771558 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:31.772496 master-1 kubenswrapper[4740]: I1014 13:13:31.771587 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:32.771381 master-1 kubenswrapper[4740]: I1014 13:13:32.771271 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:32.771381 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:32.771381 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:32.771381 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:32.771381 master-1 kubenswrapper[4740]: I1014 13:13:32.771367 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:33.771823 master-1 kubenswrapper[4740]: I1014 13:13:33.771700 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:33.771823 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:33.771823 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:33.771823 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:33.771823 master-1 kubenswrapper[4740]: I1014 13:13:33.771812 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:34.771594 master-1 kubenswrapper[4740]: I1014 13:13:34.771483 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:34.771594 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:34.771594 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:34.771594 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:34.771594 master-1 kubenswrapper[4740]: I1014 13:13:34.771587 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:35.770682 master-1 kubenswrapper[4740]: I1014 13:13:35.770592 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:35.770682 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:35.770682 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:35.770682 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:35.770682 master-1 kubenswrapper[4740]: I1014 13:13:35.770685 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:36.771189 master-1 kubenswrapper[4740]: I1014 13:13:36.771082 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:36.771189 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:36.771189 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:36.771189 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:36.771189 master-1 kubenswrapper[4740]: I1014 13:13:36.771177 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:37.771472 master-1 kubenswrapper[4740]: I1014 13:13:37.771356 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:37.771472 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:37.771472 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:37.771472 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:37.772605 master-1 kubenswrapper[4740]: I1014 13:13:37.771478 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:38.771329 master-1 kubenswrapper[4740]: I1014 13:13:38.771208 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:38.771329 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:38.771329 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:38.771329 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:38.772155 master-1 kubenswrapper[4740]: I1014 13:13:38.771332 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:39.771969 master-1 kubenswrapper[4740]: I1014 13:13:39.771876 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:39.771969 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:39.771969 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:39.771969 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:39.772899 master-1 kubenswrapper[4740]: I1014 13:13:39.771982 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:40.109763 master-1 kubenswrapper[4740]: I1014 13:13:40.109633 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:13:40.327561 master-1 kubenswrapper[4740]: I1014 13:13:40.327460 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-1"]
Oct 14 13:13:40.328088 master-1 kubenswrapper[4740]: I1014 13:13:40.328036 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" containerID="cri-o://3ad429dc9dd11eddee5b1383ef737b192bca643be4a667ff5b676aae5c21bf7d" gracePeriod=30
Oct 14 13:13:40.328427 master-1 kubenswrapper[4740]: I1014 13:13:40.328198 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" containerID="cri-o://1c4127aa23a2bb47bd11f50f568887ce25310b4602ae0f737db8b726668165fe" gracePeriod=30
Oct 14 13:13:40.328427 master-1 kubenswrapper[4740]: I1014 13:13:40.328188 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" containerID="cri-o://3388480363fa320a1eccd274b5a9a4cec5eac07b78889513af824dc57bd9ba88" gracePeriod=30
Oct 14 13:13:40.328427 master-1 kubenswrapper[4740]: I1014 13:13:40.328304 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-rev"
containerID="cri-o://7e25ecf4c26d3750937766f75c49f56c564cc6efd9d78ab9478ae6db4d0034e2" gracePeriod=30 Oct 14 13:13:40.328723 master-1 kubenswrapper[4740]: I1014 13:13:40.328625 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" containerID="cri-o://d0363272beb3e45e2b47c573ece4971be57a43ed3f3c8423ae048538797b69c8" gracePeriod=30 Oct 14 13:13:40.330872 master-1 kubenswrapper[4740]: I1014 13:13:40.330804 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:13:40.331221 master-1 kubenswrapper[4740]: E1014 13:13:40.331174 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" Oct 14 13:13:40.331221 master-1 kubenswrapper[4740]: I1014 13:13:40.331212 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: E1014 13:13:40.331288 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: I1014 13:13:40.331307 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: E1014 13:13:40.331326 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: I1014 13:13:40.331341 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: E1014 13:13:40.331358 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" 
containerName="etcd-rev" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: I1014 13:13:40.331371 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-rev" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: E1014 13:13:40.331389 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="setup" Oct 14 13:13:40.331406 master-1 kubenswrapper[4740]: I1014 13:13:40.331403 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="setup" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: E1014 13:13:40.331424 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331437 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: E1014 13:13:40.331457 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331473 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: E1014 13:13:40.331492 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-ensure-env-vars" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331505 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-ensure-env-vars" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: E1014 13:13:40.331524 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331537 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: E1014 13:13:40.331557 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-resources-copy" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331571 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-resources-copy" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331764 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-readyz" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331784 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331802 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-rev" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331819 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcdctl" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331834 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd-metrics" Oct 14 13:13:40.331890 master-1 kubenswrapper[4740]: I1014 13:13:40.331850 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.332868 master-1 kubenswrapper[4740]: I1014 13:13:40.332182 4740 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerName="etcd" Oct 14 13:13:40.436160 master-1 kubenswrapper[4740]: I1014 13:13:40.436001 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.436160 master-1 kubenswrapper[4740]: I1014 13:13:40.436171 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.436625 master-1 kubenswrapper[4740]: I1014 13:13:40.436210 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.436625 master-1 kubenswrapper[4740]: I1014 13:13:40.436295 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.436625 master-1 kubenswrapper[4740]: I1014 13:13:40.436390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 
13:13:40.436625 master-1 kubenswrapper[4740]: I1014 13:13:40.436568 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538049 master-1 kubenswrapper[4740]: I1014 13:13:40.537926 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538402 master-1 kubenswrapper[4740]: I1014 13:13:40.538118 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538402 master-1 kubenswrapper[4740]: I1014 13:13:40.538184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538402 master-1 kubenswrapper[4740]: I1014 13:13:40.538269 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538402 master-1 kubenswrapper[4740]: I1014 13:13:40.538310 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" 
(UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538402 master-1 kubenswrapper[4740]: I1014 13:13:40.538392 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538402 master-1 kubenswrapper[4740]: I1014 13:13:40.538325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538879 master-1 kubenswrapper[4740]: I1014 13:13:40.538411 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538879 master-1 kubenswrapper[4740]: I1014 13:13:40.538449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538879 master-1 kubenswrapper[4740]: I1014 13:13:40.538447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538879 master-1 
kubenswrapper[4740]: I1014 13:13:40.538467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.538879 master-1 kubenswrapper[4740]: I1014 13:13:40.538528 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"etcd-master-1\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:13:40.543405 master-1 kubenswrapper[4740]: I1014 13:13:40.543347 4740 generic.go:334] "Generic (PLEG): container finished" podID="6927f794-9b47-4a35-b412-78b7d24f7622" containerID="2b8339850f796f4cefb3b4fee56f3c30a156abd91eaf2c144f467486b31d4bff" exitCode=0 Oct 14 13:13:40.543538 master-1 kubenswrapper[4740]: I1014 13:13:40.543458 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"6927f794-9b47-4a35-b412-78b7d24f7622","Type":"ContainerDied","Data":"2b8339850f796f4cefb3b4fee56f3c30a156abd91eaf2c144f467486b31d4bff"} Oct 14 13:13:40.546648 master-1 kubenswrapper[4740]: I1014 13:13:40.546603 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log" Oct 14 13:13:40.547254 master-1 kubenswrapper[4740]: I1014 13:13:40.547179 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log" Oct 14 13:13:40.548786 master-1 kubenswrapper[4740]: I1014 13:13:40.548737 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log" Oct 14 13:13:40.550284 master-1 kubenswrapper[4740]: I1014 13:13:40.550193 4740 
generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="7e25ecf4c26d3750937766f75c49f56c564cc6efd9d78ab9478ae6db4d0034e2" exitCode=2 Oct 14 13:13:40.550284 master-1 kubenswrapper[4740]: I1014 13:13:40.550245 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="3388480363fa320a1eccd274b5a9a4cec5eac07b78889513af824dc57bd9ba88" exitCode=0 Oct 14 13:13:40.550284 master-1 kubenswrapper[4740]: I1014 13:13:40.550256 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="1c4127aa23a2bb47bd11f50f568887ce25310b4602ae0f737db8b726668165fe" exitCode=2 Oct 14 13:13:40.576406 master-1 kubenswrapper[4740]: I1014 13:13:40.576288 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" Oct 14 13:13:40.771050 master-1 kubenswrapper[4740]: I1014 13:13:40.770977 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:40.771050 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:40.771050 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:40.771050 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:40.771410 master-1 kubenswrapper[4740]: I1014 13:13:40.771081 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:41.771516 master-1 kubenswrapper[4740]: I1014 13:13:41.771451 4740 patch_prober.go:28] interesting 
pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:41.771516 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:41.771516 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:41.771516 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:41.772502 master-1 kubenswrapper[4740]: I1014 13:13:41.771520 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:41.951479 master-1 kubenswrapper[4740]: I1014 13:13:41.951398 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-5-master-1" Oct 14 13:13:42.060256 master-1 kubenswrapper[4740]: I1014 13:13:42.060174 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-var-lock\") pod \"6927f794-9b47-4a35-b412-78b7d24f7622\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " Oct 14 13:13:42.060579 master-1 kubenswrapper[4740]: I1014 13:13:42.060506 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-var-lock" (OuterVolumeSpecName: "var-lock") pod "6927f794-9b47-4a35-b412-78b7d24f7622" (UID: "6927f794-9b47-4a35-b412-78b7d24f7622"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:13:42.060711 master-1 kubenswrapper[4740]: I1014 13:13:42.060696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-kubelet-dir\") pod \"6927f794-9b47-4a35-b412-78b7d24f7622\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " Oct 14 13:13:42.060788 master-1 kubenswrapper[4740]: I1014 13:13:42.060753 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6927f794-9b47-4a35-b412-78b7d24f7622" (UID: "6927f794-9b47-4a35-b412-78b7d24f7622"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:13:42.060956 master-1 kubenswrapper[4740]: I1014 13:13:42.060942 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6927f794-9b47-4a35-b412-78b7d24f7622-kube-api-access\") pod \"6927f794-9b47-4a35-b412-78b7d24f7622\" (UID: \"6927f794-9b47-4a35-b412-78b7d24f7622\") " Oct 14 13:13:42.061370 master-1 kubenswrapper[4740]: I1014 13:13:42.061355 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:13:42.061435 master-1 kubenswrapper[4740]: I1014 13:13:42.061424 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6927f794-9b47-4a35-b412-78b7d24f7622-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:13:42.067573 master-1 kubenswrapper[4740]: I1014 13:13:42.067495 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6927f794-9b47-4a35-b412-78b7d24f7622-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6927f794-9b47-4a35-b412-78b7d24f7622" (UID: "6927f794-9b47-4a35-b412-78b7d24f7622"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:13:42.162706 master-1 kubenswrapper[4740]: I1014 13:13:42.162482 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6927f794-9b47-4a35-b412-78b7d24f7622-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:13:42.570202 master-1 kubenswrapper[4740]: I1014 13:13:42.570137 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-5-master-1" event={"ID":"6927f794-9b47-4a35-b412-78b7d24f7622","Type":"ContainerDied","Data":"b5038ca3ac631f1730a63909f85795ba3c9a6f687bb25a0eb0d359b36f9a7853"} Oct 14 13:13:42.570202 master-1 kubenswrapper[4740]: I1014 13:13:42.570193 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5038ca3ac631f1730a63909f85795ba3c9a6f687bb25a0eb0d359b36f9a7853" Oct 14 13:13:42.570895 master-1 kubenswrapper[4740]: I1014 13:13:42.570285 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-5-master-1" Oct 14 13:13:42.772568 master-1 kubenswrapper[4740]: I1014 13:13:42.772453 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:42.772568 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:42.772568 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:42.772568 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:42.772568 master-1 kubenswrapper[4740]: I1014 13:13:42.772563 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:43.771279 master-1 kubenswrapper[4740]: I1014 13:13:43.771124 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:43.771279 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:43.771279 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:43.771279 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:43.771279 master-1 kubenswrapper[4740]: I1014 13:13:43.771265 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:44.304166 master-1 kubenswrapper[4740]: I1014 13:13:44.304055 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard 
namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 14 13:13:44.304166 master-1 kubenswrapper[4740]: I1014 13:13:44.304146 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 14 13:13:44.772379 master-1 kubenswrapper[4740]: I1014 13:13:44.772288 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:44.772379 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:44.772379 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:44.772379 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:13:44.772845 master-1 kubenswrapper[4740]: I1014 13:13:44.772432 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:13:45.771347 master-1 kubenswrapper[4740]: I1014 13:13:45.771260 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:13:45.771347 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:13:45.771347 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:13:45.771347 master-1 kubenswrapper[4740]: healthz check 
failed
Oct 14 13:13:45.772332 master-1 kubenswrapper[4740]: I1014 13:13:45.771374 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:46.771896 master-1 kubenswrapper[4740]: I1014 13:13:46.771753 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:46.771896 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:46.771896 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:46.771896 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:46.773000 master-1 kubenswrapper[4740]: I1014 13:13:46.771893 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:47.770958 master-1 kubenswrapper[4740]: I1014 13:13:47.770844 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:47.770958 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:47.770958 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:47.770958 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:47.770958 master-1 kubenswrapper[4740]: I1014 13:13:47.770927 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:48.771331 master-1 kubenswrapper[4740]: I1014 13:13:48.771244 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:48.771331 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:48.771331 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:48.771331 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:48.771331 master-1 kubenswrapper[4740]: I1014 13:13:48.771307 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:49.303426 master-1 kubenswrapper[4740]: I1014 13:13:49.303333 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:13:49.303426 master-1 kubenswrapper[4740]: I1014 13:13:49.303427 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:13:49.771028 master-1 kubenswrapper[4740]: I1014 13:13:49.770965 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:49.771028 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:49.771028 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:49.771028 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:49.772411 master-1 kubenswrapper[4740]: I1014 13:13:49.771058 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:50.772973 master-1 kubenswrapper[4740]: I1014 13:13:50.772875 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:50.772973 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:50.772973 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:50.772973 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:50.774049 master-1 kubenswrapper[4740]: I1014 13:13:50.772998 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:51.772085 master-1 kubenswrapper[4740]: I1014 13:13:51.772008 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:51.772085 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:51.772085 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:51.772085 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:51.772562 master-1 kubenswrapper[4740]: I1014 13:13:51.772102 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:52.771157 master-1 kubenswrapper[4740]: I1014 13:13:52.771045 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:52.771157 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:52.771157 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:52.771157 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:52.772171 master-1 kubenswrapper[4740]: I1014 13:13:52.771156 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:53.771321 master-1 kubenswrapper[4740]: I1014 13:13:53.771192 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:53.771321 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:53.771321 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:53.771321 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:53.771321 master-1 kubenswrapper[4740]: I1014 13:13:53.771308 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:54.303275 master-1 kubenswrapper[4740]: I1014 13:13:54.303144 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:13:54.303275 master-1 kubenswrapper[4740]: I1014 13:13:54.303263 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:13:54.303807 master-1 kubenswrapper[4740]: I1014 13:13:54.303366 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-1"
Oct 14 13:13:54.304222 master-1 kubenswrapper[4740]: I1014 13:13:54.304143 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:13:54.304311 master-1 kubenswrapper[4740]: I1014 13:13:54.304279 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:13:54.771106 master-1 kubenswrapper[4740]: I1014 13:13:54.770985 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:54.771106 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:54.771106 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:54.771106 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:54.771106 master-1 kubenswrapper[4740]: I1014 13:13:54.771095 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:55.771675 master-1 kubenswrapper[4740]: I1014 13:13:55.771577 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:55.771675 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:55.771675 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:55.771675 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:55.771675 master-1 kubenswrapper[4740]: I1014 13:13:55.771669 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:56.771504 master-1 kubenswrapper[4740]: I1014 13:13:56.771402 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:56.771504 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:56.771504 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:56.771504 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:56.772354 master-1 kubenswrapper[4740]: I1014 13:13:56.771509 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:57.770075 master-1 kubenswrapper[4740]: I1014 13:13:57.769961 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:57.770075 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:57.770075 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:57.770075 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:57.770075 master-1 kubenswrapper[4740]: I1014 13:13:57.770061 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:58.771152 master-1 kubenswrapper[4740]: I1014 13:13:58.771040 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:58.771152 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:58.771152 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:58.771152 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:58.771152 master-1 kubenswrapper[4740]: I1014 13:13:58.771138 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:13:59.304134 master-1 kubenswrapper[4740]: I1014 13:13:59.303980 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:13:59.304134 master-1 kubenswrapper[4740]: I1014 13:13:59.304082 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:13:59.771649 master-1 kubenswrapper[4740]: I1014 13:13:59.771567 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:13:59.771649 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:13:59.771649 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:13:59.771649 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:13:59.771649 master-1 kubenswrapper[4740]: I1014 13:13:59.771635 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:00.771393 master-1 kubenswrapper[4740]: I1014 13:14:00.771303 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:00.771393 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:00.771393 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:00.771393 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:00.771393 master-1 kubenswrapper[4740]: I1014 13:14:00.771384 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:01.771184 master-1 kubenswrapper[4740]: I1014 13:14:01.771068 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:01.771184 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:01.771184 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:01.771184 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:01.772988 master-1 kubenswrapper[4740]: I1014 13:14:01.771186 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:02.771873 master-1 kubenswrapper[4740]: I1014 13:14:02.771751 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:02.771873 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:02.771873 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:02.771873 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:02.771873 master-1 kubenswrapper[4740]: I1014 13:14:02.771853 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:03.771624 master-1 kubenswrapper[4740]: I1014 13:14:03.771527 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:03.771624 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:03.771624 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:03.771624 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:03.771624 master-1 kubenswrapper[4740]: I1014 13:14:03.771603 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:04.304074 master-1 kubenswrapper[4740]: I1014 13:14:04.303990 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:14:04.304932 master-1 kubenswrapper[4740]: I1014 13:14:04.304072 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:14:04.770756 master-1 kubenswrapper[4740]: I1014 13:14:04.770646 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:04.770756 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:04.770756 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:04.770756 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:04.771192 master-1 kubenswrapper[4740]: I1014 13:14:04.770762 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:05.770882 master-1 kubenswrapper[4740]: I1014 13:14:05.770738 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:05.770882 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:05.770882 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:05.770882 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:05.771861 master-1 kubenswrapper[4740]: I1014 13:14:05.770888 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:06.771537 master-1 kubenswrapper[4740]: I1014 13:14:06.771441 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:06.771537 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:06.771537 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:06.771537 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:06.772575 master-1 kubenswrapper[4740]: I1014 13:14:06.771550 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:07.771034 master-1 kubenswrapper[4740]: I1014 13:14:07.770928 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:07.771034 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:07.771034 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:07.771034 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:07.772319 master-1 kubenswrapper[4740]: I1014 13:14:07.771053 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:08.771110 master-1 kubenswrapper[4740]: I1014 13:14:08.771017 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:08.771110 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:08.771110 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:08.771110 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:08.771610 master-1 kubenswrapper[4740]: I1014 13:14:08.771114 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:09.303202 master-1 kubenswrapper[4740]: I1014 13:14:09.303090 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:14:09.303202 master-1 kubenswrapper[4740]: I1014 13:14:09.303167 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:14:09.771538 master-1 kubenswrapper[4740]: I1014 13:14:09.771451 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:09.771538 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:09.771538 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:09.771538 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:09.772525 master-1 kubenswrapper[4740]: I1014 13:14:09.771550 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:10.101956 master-1 kubenswrapper[4740]: I1014 13:14:10.101796 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:14:10.771984 master-1 kubenswrapper[4740]: I1014 13:14:10.771883 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:10.771984 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:10.771984 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:10.771984 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:10.771984 master-1 kubenswrapper[4740]: I1014 13:14:10.771937 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:10.775999 master-1 kubenswrapper[4740]: I1014 13:14:10.775941 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/2.log"
Oct 14 13:14:10.776636 master-1 kubenswrapper[4740]: I1014 13:14:10.776572 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/1.log"
Oct 14 13:14:10.777056 master-1 kubenswrapper[4740]: I1014 13:14:10.777006 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log"
Oct 14 13:14:10.778200 master-1 kubenswrapper[4740]: I1014 13:14:10.778124 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log"
Oct 14 13:14:10.778827 master-1 kubenswrapper[4740]: I1014 13:14:10.778772 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcdctl/0.log"
Oct 14 13:14:10.779781 master-1 kubenswrapper[4740]: I1014 13:14:10.779724 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="d0363272beb3e45e2b47c573ece4971be57a43ed3f3c8423ae048538797b69c8" exitCode=137
Oct 14 13:14:10.779781 master-1 kubenswrapper[4740]: I1014 13:14:10.779745 4740 generic.go:334] "Generic (PLEG): container finished" podID="5268b2f2ae2aef0c7f2e7a6e651ed702" containerID="3ad429dc9dd11eddee5b1383ef737b192bca643be4a667ff5b676aae5c21bf7d" exitCode=137
Oct 14 13:14:10.779781 master-1 kubenswrapper[4740]: I1014 13:14:10.779772 4740 scope.go:117] "RemoveContainer" containerID="034ad11481c70194b2d513c0576933075d6cb443937ebeaa5eed0d095effeec8"
Oct 14 13:14:10.934765 master-1 kubenswrapper[4740]: I1014 13:14:10.934690 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/2.log"
Oct 14 13:14:10.935291 master-1 kubenswrapper[4740]: I1014 13:14:10.935210 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log"
Oct 14 13:14:10.936120 master-1 kubenswrapper[4740]: I1014 13:14:10.936068 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log"
Oct 14 13:14:10.936749 master-1 kubenswrapper[4740]: I1014 13:14:10.936682 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcdctl/0.log"
Oct 14 13:14:10.938221 master-1 kubenswrapper[4740]: I1014 13:14:10.938171 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 14 13:14:10.944960 master-1 kubenswrapper[4740]: I1014 13:14:10.944887 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86"
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.045558 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") "
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.045653 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") "
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.045708 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") "
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.045765 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") "
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.045857 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") "
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.046049 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") pod \"5268b2f2ae2aef0c7f2e7a6e651ed702\" (UID: \"5268b2f2ae2aef0c7f2e7a6e651ed702\") "
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.047321 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir" (OuterVolumeSpecName: "data-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.047379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.047424 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.047467 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.047508 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:14:11.050454 master-1 kubenswrapper[4740]: I1014 13:14:11.047547 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir" (OuterVolumeSpecName: "log-dir") pod "5268b2f2ae2aef0c7f2e7a6e651ed702" (UID: "5268b2f2ae2aef0c7f2e7a6e651ed702"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:14:11.147591 master-1 kubenswrapper[4740]: I1014 13:14:11.147517 4740 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-log-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:14:11.147591 master-1 kubenswrapper[4740]: I1014 13:14:11.147573 4740 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-data-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:14:11.147591 master-1 kubenswrapper[4740]: I1014 13:14:11.147592 4740 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-usr-local-bin\") on node \"master-1\" DevicePath \"\""
Oct 14 13:14:11.147906 master-1 kubenswrapper[4740]: I1014 13:14:11.147612 4740 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-static-pod-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:14:11.147906 master-1 kubenswrapper[4740]: I1014 13:14:11.147630 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-resource-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:14:11.147906 master-1 kubenswrapper[4740]: I1014 13:14:11.147646 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5268b2f2ae2aef0c7f2e7a6e651ed702-cert-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:14:11.770735 master-1 kubenswrapper[4740]: I1014 13:14:11.770596 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:14:11.770735 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:14:11.770735 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:14:11.770735 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:14:11.770735 master-1 kubenswrapper[4740]: I1014 13:14:11.770693 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:14:11.790346 master-1 kubenswrapper[4740]: I1014 13:14:11.790261 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd/2.log"
Oct 14 13:14:11.791210 master-1 kubenswrapper[4740]: I1014 13:14:11.790982 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-rev/0.log"
Oct 14 13:14:11.792658 master-1 kubenswrapper[4740]: I1014 13:14:11.792607 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcd-metrics/0.log"
Oct 14 13:14:11.793285 master-1 kubenswrapper[4740]: I1014 13:14:11.793220 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_5268b2f2ae2aef0c7f2e7a6e651ed702/etcdctl/0.log"
Oct 14 13:14:11.794848 master-1 kubenswrapper[4740]: I1014 13:14:11.794803 4740 scope.go:117] "RemoveContainer" containerID="d0363272beb3e45e2b47c573ece4971be57a43ed3f3c8423ae048538797b69c8"
Oct 14 13:14:11.794990 master-1 kubenswrapper[4740]: I1014 13:14:11.794938 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 14 13:14:11.803013 master-1 kubenswrapper[4740]: I1014 13:14:11.802950 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86"
Oct 14 13:14:11.819669 master-1 kubenswrapper[4740]: I1014 13:14:11.819616 4740 scope.go:117] "RemoveContainer" containerID="7e25ecf4c26d3750937766f75c49f56c564cc6efd9d78ab9478ae6db4d0034e2"
Oct 14 13:14:11.833143 master-1 kubenswrapper[4740]: I1014 13:14:11.833052 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="5268b2f2ae2aef0c7f2e7a6e651ed702" podUID="2b1859aa05c2c75eb43d086c9ccd9c86"
Oct 14 13:14:11.842359 master-1 kubenswrapper[4740]: I1014 13:14:11.842295 4740 scope.go:117] "RemoveContainer" containerID="3388480363fa320a1eccd274b5a9a4cec5eac07b78889513af824dc57bd9ba88"
Oct 14 13:14:11.865039 master-1 kubenswrapper[4740]: I1014 13:14:11.864986 4740 scope.go:117] "RemoveContainer" containerID="1c4127aa23a2bb47bd11f50f568887ce25310b4602ae0f737db8b726668165fe"
Oct 14 13:14:11.890146 master-1 kubenswrapper[4740]: I1014 13:14:11.890031 4740 scope.go:117] "RemoveContainer" containerID="3ad429dc9dd11eddee5b1383ef737b192bca643be4a667ff5b676aae5c21bf7d"
Oct 14 13:14:11.912209 master-1 kubenswrapper[4740]: I1014 13:14:11.912150 4740 scope.go:117] "RemoveContainer" containerID="0a7ed387459e762f8ccb30f7efeb5119321940481a9afbc53d82ca7fb27535c9"
Oct 14 13:14:11.935826 master-1 kubenswrapper[4740]: I1014 13:14:11.935751 4740 scope.go:117] "RemoveContainer" containerID="6eb39306f1e750f5ab8ca9dec1568e973919404ed5ef6123d484075d59ac469e"
Oct 14 13:14:11.973199 master-1 kubenswrapper[4740]: I1014 13:14:11.973124 4740 scope.go:117] "RemoveContainer" containerID="a17e6de30d12f2a96c26a7839f239dfcb307d54996d4678acc925f2c00d9e55e"
Oct 14 13:14:12.771723
master-1 kubenswrapper[4740]: I1014 13:14:12.771602 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:14:12.771723 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:14:12.771723 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:14:12.771723 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:14:12.771723 master-1 kubenswrapper[4740]: I1014 13:14:12.771701 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:14:12.960524 master-1 kubenswrapper[4740]: I1014 13:14:12.960432 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5268b2f2ae2aef0c7f2e7a6e651ed702" path="/var/lib/kubelet/pods/5268b2f2ae2aef0c7f2e7a6e651ed702/volumes" Oct 14 13:14:13.771892 master-1 kubenswrapper[4740]: I1014 13:14:13.771767 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:14:13.771892 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:14:13.771892 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:14:13.771892 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:14:13.771892 master-1 kubenswrapper[4740]: I1014 13:14:13.771881 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Oct 14 13:14:14.303191 master-1 kubenswrapper[4740]: I1014 13:14:14.303139 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 14 13:14:14.304045 master-1 kubenswrapper[4740]: I1014 13:14:14.303929 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 14 13:14:14.771163 master-1 kubenswrapper[4740]: I1014 13:14:14.771054 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:14:14.771163 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:14:14.771163 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:14:14.771163 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:14:14.771618 master-1 kubenswrapper[4740]: I1014 13:14:14.771158 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:14:15.772179 master-1 kubenswrapper[4740]: I1014 13:14:15.772063 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:14:15.772179 master-1 kubenswrapper[4740]: 
[-]has-synced failed: reason withheld Oct 14 13:14:15.772179 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:14:15.772179 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:14:15.773168 master-1 kubenswrapper[4740]: I1014 13:14:15.772181 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:14:16.772201 master-1 kubenswrapper[4740]: I1014 13:14:16.772070 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:14:16.772201 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:14:16.772201 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:14:16.772201 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:14:16.772201 master-1 kubenswrapper[4740]: I1014 13:14:16.772173 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:14:16.773399 master-1 kubenswrapper[4740]: I1014 13:14:16.772284 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:14:16.773399 master-1 kubenswrapper[4740]: I1014 13:14:16.773199 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f8c9d5de8cdc8e09521c2a264d3a5c111dd776eb29cce79eace0db63652de74f"} pod="openshift-ingress/router-default-5ddb89f76-xf924" containerMessage="Container router failed startup probe, will be 
restarted" Oct 14 13:14:16.773399 master-1 kubenswrapper[4740]: I1014 13:14:16.773293 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" containerID="cri-o://f8c9d5de8cdc8e09521c2a264d3a5c111dd776eb29cce79eace0db63652de74f" gracePeriod=3600 Oct 14 13:14:19.303645 master-1 kubenswrapper[4740]: I1014 13:14:19.303527 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 14 13:14:19.303645 master-1 kubenswrapper[4740]: I1014 13:14:19.303601 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 14 13:14:19.944195 master-1 kubenswrapper[4740]: I1014 13:14:19.944040 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 14 13:14:19.960583 master-1 kubenswrapper[4740]: I1014 13:14:19.960529 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-1" podUID="456bcc90-4ab4-4efe-9da2-21d3c0d06dd6" Oct 14 13:14:19.960583 master-1 kubenswrapper[4740]: I1014 13:14:19.960575 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-1" podUID="456bcc90-4ab4-4efe-9da2-21d3c0d06dd6" Oct 14 13:14:19.988340 master-1 kubenswrapper[4740]: I1014 13:14:19.988263 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-1" Oct 14 13:14:19.990503 master-1 kubenswrapper[4740]: I1014 13:14:19.990431 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:14:20.013263 master-1 kubenswrapper[4740]: I1014 13:14:20.013131 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:14:20.022758 master-1 kubenswrapper[4740]: I1014 13:14:20.022665 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-1" Oct 14 13:14:20.030029 master-1 kubenswrapper[4740]: I1014 13:14:20.029974 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:14:20.863173 master-1 kubenswrapper[4740]: I1014 13:14:20.863080 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="579299c374d3e90207fed9d0ac7add539c5bee12f49cbd11da0109e242ed4ca2" exitCode=0 Oct 14 13:14:20.864131 master-1 kubenswrapper[4740]: I1014 13:14:20.863171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerDied","Data":"579299c374d3e90207fed9d0ac7add539c5bee12f49cbd11da0109e242ed4ca2"} Oct 14 13:14:20.864131 master-1 kubenswrapper[4740]: I1014 13:14:20.863281 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"9e6561c1df9cb119edf95cccc7a4fd3c77acbeed2a0f08e61145e54c7b85ed02"} Oct 14 13:14:21.876094 master-1 kubenswrapper[4740]: I1014 13:14:21.875987 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="a428a767276dd7199fd91dd5f2f6673a06e9529e326ebf71716ff52e3c752eb8" exitCode=0 Oct 14 13:14:21.876986 master-1 kubenswrapper[4740]: I1014 13:14:21.876083 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerDied","Data":"a428a767276dd7199fd91dd5f2f6673a06e9529e326ebf71716ff52e3c752eb8"} Oct 14 13:14:22.894137 master-1 kubenswrapper[4740]: I1014 13:14:22.894067 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="11ca14a2e498d959bad210f3614e1233732965efc52aed100074f0c18857fa17" exitCode=0 Oct 14 13:14:22.894137 master-1 
kubenswrapper[4740]: I1014 13:14:22.894133 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerDied","Data":"11ca14a2e498d959bad210f3614e1233732965efc52aed100074f0c18857fa17"} Oct 14 13:14:23.905478 master-1 kubenswrapper[4740]: I1014 13:14:23.905419 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"8ccc55e8766de0b5ea595b51afd74c9ee750d77dbab2d822a06ca94d46f0d682"} Oct 14 13:14:23.905478 master-1 kubenswrapper[4740]: I1014 13:14:23.905467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"d15297c41202b9d0b9c85f5d1690476d1f865b7ea28526de3a3203a97bfd1c48"} Oct 14 13:14:23.905478 master-1 kubenswrapper[4740]: I1014 13:14:23.905479 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"6deb61510a50f20d8a9f8067be1b2fc90640db24c7a18c642c99fb75420a3916"} Oct 14 13:14:24.920300 master-1 kubenswrapper[4740]: I1014 13:14:24.920209 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"38b057ae8b40d687f60b71f7fba2f8022d9c13a14d7ce7d0dc5582d37e59a6b0"} Oct 14 13:14:24.921325 master-1 kubenswrapper[4740]: I1014 13:14:24.921282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"2b1859aa05c2c75eb43d086c9ccd9c86","Type":"ContainerStarted","Data":"12e4d73d95a7dc18b338e89f6b04f58e4c4375db44a191a85f3a89f8fd4875aa"} Oct 14 13:14:24.991217 master-1 kubenswrapper[4740]: I1014 13:14:24.991092 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd/etcd-master-1" podStartSLOduration=4.991058219 podStartE2EDuration="4.991058219s" podCreationTimestamp="2025-10-14 13:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:14:24.985549458 +0000 UTC m=+490.795838867" watchObservedRunningTime="2025-10-14 13:14:24.991058219 +0000 UTC m=+490.801347598" Oct 14 13:14:25.023845 master-1 kubenswrapper[4740]: I1014 13:14:25.023699 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1" Oct 14 13:14:29.304289 master-1 kubenswrapper[4740]: I1014 13:14:29.304181 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded" start-of-body= Oct 14 13:14:29.305515 master-1 kubenswrapper[4740]: I1014 13:14:29.304349 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded" Oct 14 13:14:30.023040 master-1 kubenswrapper[4740]: I1014 13:14:30.022949 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1" Oct 14 13:14:34.304622 master-1 kubenswrapper[4740]: I1014 13:14:34.304527 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded" start-of-body= Oct 14 13:14:34.305481 master-1 kubenswrapper[4740]: I1014 13:14:34.304640 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" 
output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded" Oct 14 13:14:39.305790 master-1 kubenswrapper[4740]: I1014 13:14:39.305641 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:14:39.306908 master-1 kubenswrapper[4740]: I1014 13:14:39.305823 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:14:40.106147 master-1 kubenswrapper[4740]: I1014 13:14:40.106047 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:14:41.023803 master-1 kubenswrapper[4740]: I1014 13:14:41.023708 4740 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:14:41.024441 master-1 kubenswrapper[4740]: I1014 13:14:41.023805 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:14:43.005093 master-1 kubenswrapper[4740]: E1014 13:14:43.004930 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], 
failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" podUID="cc579fa5-c1e0-40ed-b1f3-e953a42e74d6" Oct 14 13:14:43.005093 master-1 kubenswrapper[4740]: E1014 13:14:43.005009 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" podUID="180ced15-1fb1-464d-85f2-0bcc0d836dab" Oct 14 13:14:43.048495 master-1 kubenswrapper[4740]: I1014 13:14:43.048421 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:14:43.048930 master-1 kubenswrapper[4740]: I1014 13:14:43.048435 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:14:44.306705 master-1 kubenswrapper[4740]: I1014 13:14:44.306577 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:14:44.307743 master-1 kubenswrapper[4740]: I1014 13:14:44.306708 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:14:45.484497 master-1 kubenswrapper[4740]: I1014 13:14:45.484385 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:14:45.486077 master-1 kubenswrapper[4740]: E1014 13:14:45.484752 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:16:47.48470023 +0000 UTC m=+633.294989619 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:14:45.587089 master-1 kubenswrapper[4740]: I1014 13:14:45.586919 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:14:45.587434 master-1 kubenswrapper[4740]: E1014 13:14:45.587352 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:16:47.587301328 +0000 UTC m=+633.397590897 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:14:49.307255 master-1 kubenswrapper[4740]: I1014 13:14:49.307124 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:14:49.308223 master-1 kubenswrapper[4740]: I1014 13:14:49.307222 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:14:51.023951 master-1 kubenswrapper[4740]: I1014 13:14:51.023865 4740 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:14:51.024920 master-1 kubenswrapper[4740]: I1014 13:14:51.023981 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:14:54.307858 master-1 kubenswrapper[4740]: I1014 13:14:54.307748 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get 
\"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:14:54.308851 master-1 kubenswrapper[4740]: I1014 13:14:54.307858 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:14:59.309274 master-1 kubenswrapper[4740]: I1014 13:14:59.309139 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded" start-of-body= Oct 14 13:14:59.310593 master-1 kubenswrapper[4740]: I1014 13:14:59.309294 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded" Oct 14 13:15:01.024134 master-1 kubenswrapper[4740]: I1014 13:15:01.024055 4740 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:15:01.024134 master-1 kubenswrapper[4740]: I1014 13:15:01.024132 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:15:03.199158 master-1 kubenswrapper[4740]: I1014 13:15:03.199051 4740 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/2.log" Oct 14 13:15:03.200209 master-1 kubenswrapper[4740]: I1014 13:15:03.200113 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/1.log" Oct 14 13:15:03.200945 master-1 kubenswrapper[4740]: I1014 13:15:03.200866 4740 generic.go:334] "Generic (PLEG): container finished" podID="398ba6fd-0f8f-46af-b690-61a6eec9176b" containerID="03ca19c1b466ba0fcc071d9bfb4a5ed1c705eab7bdb06858b96afeb5d268130b" exitCode=1 Oct 14 13:15:03.201077 master-1 kubenswrapper[4740]: I1014 13:15:03.200964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerDied","Data":"03ca19c1b466ba0fcc071d9bfb4a5ed1c705eab7bdb06858b96afeb5d268130b"} Oct 14 13:15:03.201077 master-1 kubenswrapper[4740]: I1014 13:15:03.201039 4740 scope.go:117] "RemoveContainer" containerID="4642cf87216d34a41602fbb9cf593d0d329fd43c67ed7b264d9a3b2b3022daaf" Oct 14 13:15:03.201827 master-1 kubenswrapper[4740]: I1014 13:15:03.201758 4740 scope.go:117] "RemoveContainer" containerID="03ca19c1b466ba0fcc071d9bfb4a5ed1c705eab7bdb06858b96afeb5d268130b" Oct 14 13:15:03.202286 master-1 kubenswrapper[4740]: E1014 13:15:03.202185 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-xhdjt_openshift-ingress-operator(398ba6fd-0f8f-46af-b690-61a6eec9176b)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" podUID="398ba6fd-0f8f-46af-b690-61a6eec9176b" Oct 14 13:15:03.205549 master-1 kubenswrapper[4740]: I1014 13:15:03.205493 4740 generic.go:334] 
"Generic (PLEG): container finished" podID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerID="f8c9d5de8cdc8e09521c2a264d3a5c111dd776eb29cce79eace0db63652de74f" exitCode=0
Oct 14 13:15:03.205670 master-1 kubenswrapper[4740]: I1014 13:15:03.205547 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerDied","Data":"f8c9d5de8cdc8e09521c2a264d3a5c111dd776eb29cce79eace0db63652de74f"}
Oct 14 13:15:03.205670 master-1 kubenswrapper[4740]: I1014 13:15:03.205581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerStarted","Data":"574dcc96f027c302746e71fa1b6d9e59728f15441bda5dda38c7fb4f50571750"}
Oct 14 13:15:03.254042 master-1 kubenswrapper[4740]: I1014 13:15:03.253978 4740 scope.go:117] "RemoveContainer" containerID="57f4d6aac1f3c80fb4d6e8a8343432ff9667911716e629d1c9aa8b443a819f98"
Oct 14 13:15:03.767837 master-1 kubenswrapper[4740]: I1014 13:15:03.767767 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-xf924"
Oct 14 13:15:03.768051 master-1 kubenswrapper[4740]: I1014 13:15:03.767960 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-xf924"
Oct 14 13:15:03.771117 master-1 kubenswrapper[4740]: I1014 13:15:03.771084 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:03.771117 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:03.771117 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:03.771117 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:03.771347 master-1 kubenswrapper[4740]: I1014 13:15:03.771137 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:04.220549 master-1 kubenswrapper[4740]: I1014 13:15:04.220368 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/2.log"
Oct 14 13:15:04.310349 master-1 kubenswrapper[4740]: I1014 13:15:04.310205 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 13:15:04.310569 master-1 kubenswrapper[4740]: I1014 13:15:04.310380 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:15:04.771306 master-1 kubenswrapper[4740]: I1014 13:15:04.771006 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:04.771306 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:04.771306 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:04.771306 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:04.771306 master-1 kubenswrapper[4740]: I1014 13:15:04.771144 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:05.771683 master-1 kubenswrapper[4740]: I1014 13:15:05.771586 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:05.771683 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:05.771683 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:05.771683 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:05.771683 master-1 kubenswrapper[4740]: I1014 13:15:05.771681 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:06.771112 master-1 kubenswrapper[4740]: I1014 13:15:06.771002 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:06.771112 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:06.771112 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:06.771112 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:06.771112 master-1 kubenswrapper[4740]: I1014 13:15:06.771100 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:07.771273 master-1 kubenswrapper[4740]: I1014 13:15:07.771158 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:07.771273 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:07.771273 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:07.771273 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:07.772585 master-1 kubenswrapper[4740]: I1014 13:15:07.771288 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:08.771522 master-1 kubenswrapper[4740]: I1014 13:15:08.771423 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:08.771522 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:08.771522 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:08.771522 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:08.772652 master-1 kubenswrapper[4740]: I1014 13:15:08.771526 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:09.311184 master-1 kubenswrapper[4740]: I1014 13:15:09.311096 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 13:15:09.311603 master-1 kubenswrapper[4740]: I1014 13:15:09.311197 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:15:09.771197 master-1 kubenswrapper[4740]: I1014 13:15:09.771080 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:09.771197 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:09.771197 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:09.771197 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:09.772173 master-1 kubenswrapper[4740]: I1014 13:15:09.771204 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:10.103808 master-1 kubenswrapper[4740]: I1014 13:15:10.103611 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:15:10.771336 master-1 kubenswrapper[4740]: I1014 13:15:10.771203 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:10.771336 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:10.771336 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:10.771336 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:10.772465 master-1 kubenswrapper[4740]: I1014 13:15:10.771335 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:11.024748 master-1 kubenswrapper[4740]: I1014 13:15:11.024572 4740 patch_prober.go:28] interesting pod/etcd-master-1 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded" start-of-body=
Oct 14 13:15:11.024748 master-1 kubenswrapper[4740]: I1014 13:15:11.024696 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": context deadline exceeded"
Oct 14 13:15:11.771364 master-1 kubenswrapper[4740]: I1014 13:15:11.771289 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:11.771364 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:11.771364 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:11.771364 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:11.771364 master-1 kubenswrapper[4740]: I1014 13:15:11.771368 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:12.771922 master-1 kubenswrapper[4740]: I1014 13:15:12.771841 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:12.771922 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:12.771922 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:12.771922 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:12.772504 master-1 kubenswrapper[4740]: I1014 13:15:12.771952 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:13.342616 master-1 kubenswrapper[4740]: I1014 13:15:13.342503 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-1"
Oct 14 13:15:13.770511 master-1 kubenswrapper[4740]: I1014 13:15:13.770174 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:13.770511 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:13.770511 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:13.770511 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:13.770949 master-1 kubenswrapper[4740]: I1014 13:15:13.770513 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:14.771451 master-1 kubenswrapper[4740]: I1014 13:15:14.771380 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:14.771451 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:14.771451 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:14.771451 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:14.772505 master-1 kubenswrapper[4740]: I1014 13:15:14.771459 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:15.771865 master-1 kubenswrapper[4740]: I1014 13:15:15.771782 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:15.771865 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:15.771865 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:15.771865 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:15.772564 master-1 kubenswrapper[4740]: I1014 13:15:15.771876 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:16.771668 master-1 kubenswrapper[4740]: I1014 13:15:16.771583 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:16.771668 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:16.771668 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:16.771668 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:16.771668 master-1 kubenswrapper[4740]: I1014 13:15:16.771673 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:16.944351 master-1 kubenswrapper[4740]: I1014 13:15:16.944268 4740 scope.go:117] "RemoveContainer" containerID="03ca19c1b466ba0fcc071d9bfb4a5ed1c705eab7bdb06858b96afeb5d268130b"
Oct 14 13:15:16.944716 master-1 kubenswrapper[4740]: E1014 13:15:16.944544 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-766ddf4575-xhdjt_openshift-ingress-operator(398ba6fd-0f8f-46af-b690-61a6eec9176b)\"" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" podUID="398ba6fd-0f8f-46af-b690-61a6eec9176b"
Oct 14 13:15:17.771807 master-1 kubenswrapper[4740]: I1014 13:15:17.771701 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:17.771807 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:17.771807 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:17.771807 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:17.772795 master-1 kubenswrapper[4740]: I1014 13:15:17.771823 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:17.988820 master-1 kubenswrapper[4740]: E1014 13:15:17.988695 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:17.989110 master-1 kubenswrapper[4740]: E1014 13:15:17.988873 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:18.488829765 +0000 UTC m=+544.299119134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:18.497003 master-1 kubenswrapper[4740]: E1014 13:15:18.496891 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:18.497003 master-1 kubenswrapper[4740]: E1014 13:15:18.496989 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:19.496970083 +0000 UTC m=+545.307259412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:18.770995 master-1 kubenswrapper[4740]: I1014 13:15:18.770798 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:18.770995 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:18.770995 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:18.770995 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:18.770995 master-1 kubenswrapper[4740]: I1014 13:15:18.770917 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:19.512077 master-1 kubenswrapper[4740]: E1014 13:15:19.511990 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:19.512631 master-1 kubenswrapper[4740]: E1014 13:15:19.512103 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:21.51207933 +0000 UTC m=+547.322368729 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:19.771989 master-1 kubenswrapper[4740]: I1014 13:15:19.771813 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:19.771989 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:19.771989 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:19.771989 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:19.771989 master-1 kubenswrapper[4740]: I1014 13:15:19.771907 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:20.039870 master-1 kubenswrapper[4740]: I1014 13:15:20.039656 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-1"
Oct 14 13:15:20.057109 master-1 kubenswrapper[4740]: I1014 13:15:20.057035 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-1"
Oct 14 13:15:20.772102 master-1 kubenswrapper[4740]: I1014 13:15:20.771987 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:20.772102 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:20.772102 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:20.772102 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:20.773216 master-1 kubenswrapper[4740]: I1014 13:15:20.772116 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:21.540108 master-1 kubenswrapper[4740]: E1014 13:15:21.540001 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:21.540475 master-1 kubenswrapper[4740]: E1014 13:15:21.540129 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:25.540104851 +0000 UTC m=+551.350394210 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:21.771343 master-1 kubenswrapper[4740]: I1014 13:15:21.771201 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:21.771343 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:21.771343 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:21.771343 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:21.771343 master-1 kubenswrapper[4740]: I1014 13:15:21.771321 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:22.770498 master-1 kubenswrapper[4740]: I1014 13:15:22.770347 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:22.770498 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:22.770498 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:22.770498 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:22.770498 master-1 kubenswrapper[4740]: I1014 13:15:22.770459 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:23.770806 master-1 kubenswrapper[4740]: I1014 13:15:23.770686 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:23.770806 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:23.770806 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:23.770806 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:23.770806 master-1 kubenswrapper[4740]: I1014 13:15:23.770785 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:24.771416 master-1 kubenswrapper[4740]: I1014 13:15:24.771302 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:24.771416 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:24.771416 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:24.771416 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:24.771416 master-1 kubenswrapper[4740]: I1014 13:15:24.771397 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:25.604677 master-1 kubenswrapper[4740]: E1014 13:15:25.604555 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:25.605007 master-1 kubenswrapper[4740]: E1014 13:15:25.604712 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:33.604679237 +0000 UTC m=+559.414968596 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:25.771096 master-1 kubenswrapper[4740]: I1014 13:15:25.770998 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:25.771096 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:25.771096 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:25.771096 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:25.771096 master-1 kubenswrapper[4740]: I1014 13:15:25.771088 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:26.771300 master-1 kubenswrapper[4740]: I1014 13:15:26.771196 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:26.771300 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:26.771300 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:26.771300 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:26.772401 master-1 kubenswrapper[4740]: I1014 13:15:26.771335 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:27.771452 master-1 kubenswrapper[4740]: I1014 13:15:27.771338 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:27.771452 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:27.771452 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:27.771452 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:27.772659 master-1 kubenswrapper[4740]: I1014 13:15:27.771453 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:28.770774 master-1 kubenswrapper[4740]: I1014 13:15:28.770689 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:28.770774 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:28.770774 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:28.770774 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:28.771336 master-1 kubenswrapper[4740]: I1014 13:15:28.770790 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:29.771128 master-1 kubenswrapper[4740]: I1014 13:15:29.771066 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:29.771128 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:29.771128 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:29.771128 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:29.772027 master-1 kubenswrapper[4740]: I1014 13:15:29.771155 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:30.771084 master-1 kubenswrapper[4740]: I1014 13:15:30.770979 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:30.771084 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:30.771084 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:30.771084 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:30.771084 master-1 kubenswrapper[4740]: I1014 13:15:30.771065 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:31.771291 master-1 kubenswrapper[4740]: I1014 13:15:31.771134 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:31.771291 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:31.771291 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:31.771291 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:31.772501 master-1 kubenswrapper[4740]: I1014 13:15:31.771366 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:31.945431 master-1 kubenswrapper[4740]: I1014 13:15:31.945170 4740 scope.go:117] "RemoveContainer" containerID="03ca19c1b466ba0fcc071d9bfb4a5ed1c705eab7bdb06858b96afeb5d268130b"
Oct 14 13:15:32.434966 master-1 kubenswrapper[4740]: I1014 13:15:32.434790 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/2.log"
Oct 14 13:15:32.435622 master-1 kubenswrapper[4740]: I1014 13:15:32.435556 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt" event={"ID":"398ba6fd-0f8f-46af-b690-61a6eec9176b","Type":"ContainerStarted","Data":"00ef0dd491be83ec18b39dbf4307cddef07ddfc609e0b405f96ff91489826e91"}
Oct 14 13:15:32.770456 master-1 kubenswrapper[4740]: I1014 13:15:32.770382 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:32.770456 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:32.770456 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:32.770456 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:32.770456 master-1 kubenswrapper[4740]: I1014 13:15:32.770450 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:33.620133 master-1 kubenswrapper[4740]: E1014 13:15:33.620018 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:33.620846 master-1 kubenswrapper[4740]: E1014 13:15:33.620144 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:49.620108611 +0000 UTC m=+575.430397980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:15:33.771428 master-1 kubenswrapper[4740]: I1014 13:15:33.771306 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:33.771428 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:33.771428 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:33.771428 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:33.771916 master-1 kubenswrapper[4740]: I1014 13:15:33.771472 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:15:34.771488 master-1 kubenswrapper[4740]: I1014 13:15:34.771363 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:15:34.771488 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:15:34.771488 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:15:34.771488 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:15:34.772697 master-1 kubenswrapper[4740]: I1014 13:15:34.771493 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router"
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:35.771749 master-1 kubenswrapper[4740]: I1014 13:15:35.771684 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:35.771749 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:35.771749 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:35.771749 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:35.771749 master-1 kubenswrapper[4740]: I1014 13:15:35.771757 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:36.770967 master-1 kubenswrapper[4740]: I1014 13:15:36.770878 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:36.770967 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:36.770967 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:36.770967 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:36.771415 master-1 kubenswrapper[4740]: I1014 13:15:36.770983 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:37.770912 master-1 kubenswrapper[4740]: I1014 13:15:37.770835 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:37.770912 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:37.770912 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:37.770912 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:37.771817 master-1 kubenswrapper[4740]: I1014 13:15:37.770945 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:38.771318 master-1 kubenswrapper[4740]: I1014 13:15:38.771222 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:38.771318 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:38.771318 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:38.771318 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:38.772495 master-1 kubenswrapper[4740]: I1014 13:15:38.772406 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:39.771444 master-1 kubenswrapper[4740]: I1014 13:15:39.771322 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:39.771444 master-1 
kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:39.771444 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:39.771444 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:39.771444 master-1 kubenswrapper[4740]: I1014 13:15:39.771431 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:40.111343 master-1 kubenswrapper[4740]: I1014 13:15:40.111157 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:15:40.771362 master-1 kubenswrapper[4740]: I1014 13:15:40.771276 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:40.771362 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:40.771362 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:40.771362 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:40.772437 master-1 kubenswrapper[4740]: I1014 13:15:40.771376 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:41.771073 master-1 kubenswrapper[4740]: I1014 13:15:41.770995 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Oct 14 13:15:41.771073 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:41.771073 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:41.771073 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:41.771430 master-1 kubenswrapper[4740]: I1014 13:15:41.771094 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:42.771512 master-1 kubenswrapper[4740]: I1014 13:15:42.771409 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:42.771512 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:42.771512 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:42.771512 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:42.771512 master-1 kubenswrapper[4740]: I1014 13:15:42.771506 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:43.771683 master-1 kubenswrapper[4740]: I1014 13:15:43.771570 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:43.771683 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:43.771683 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:43.771683 master-1 
kubenswrapper[4740]: healthz check failed Oct 14 13:15:43.771683 master-1 kubenswrapper[4740]: I1014 13:15:43.771650 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:44.771743 master-1 kubenswrapper[4740]: I1014 13:15:44.771620 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:44.771743 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:44.771743 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:44.771743 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:44.772933 master-1 kubenswrapper[4740]: I1014 13:15:44.771749 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:45.771683 master-1 kubenswrapper[4740]: I1014 13:15:45.771565 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:45.771683 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:45.771683 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:45.771683 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:45.771683 master-1 kubenswrapper[4740]: I1014 13:15:45.771668 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:46.771273 master-1 kubenswrapper[4740]: I1014 13:15:46.771163 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:46.771273 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:46.771273 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:46.771273 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:46.771699 master-1 kubenswrapper[4740]: I1014 13:15:46.771304 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:47.770638 master-1 kubenswrapper[4740]: I1014 13:15:47.770554 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:47.770638 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:47.770638 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:47.770638 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:47.770638 master-1 kubenswrapper[4740]: I1014 13:15:47.770630 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:48.771827 
master-1 kubenswrapper[4740]: I1014 13:15:48.771718 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:48.771827 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:48.771827 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:48.771827 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:48.771827 master-1 kubenswrapper[4740]: I1014 13:15:48.771812 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:49.648748 master-1 kubenswrapper[4740]: E1014 13:15:49.648656 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found Oct 14 13:15:49.648748 master-1 kubenswrapper[4740]: E1014 13:15:49.648768 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:16:21.648744101 +0000 UTC m=+607.459033470 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found Oct 14 13:15:49.770619 master-1 kubenswrapper[4740]: I1014 13:15:49.770509 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:49.770619 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:49.770619 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:49.770619 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:49.770619 master-1 kubenswrapper[4740]: I1014 13:15:49.770606 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:50.771744 master-1 kubenswrapper[4740]: I1014 13:15:50.771630 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:50.771744 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:50.771744 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:50.771744 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:50.771744 master-1 kubenswrapper[4740]: I1014 13:15:50.771719 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:51.771422 master-1 kubenswrapper[4740]: I1014 13:15:51.771294 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:51.771422 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:51.771422 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:51.771422 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:51.771422 master-1 kubenswrapper[4740]: I1014 13:15:51.771414 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:52.771441 master-1 kubenswrapper[4740]: I1014 13:15:52.771311 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:52.771441 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:52.771441 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:52.771441 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:52.771441 master-1 kubenswrapper[4740]: I1014 13:15:52.771425 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:53.771016 master-1 kubenswrapper[4740]: I1014 13:15:53.770889 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:53.771016 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:53.771016 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:53.771016 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:53.771016 master-1 kubenswrapper[4740]: I1014 13:15:53.771011 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:54.233594 master-1 kubenswrapper[4740]: I1014 13:15:54.233362 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-1"] Oct 14 13:15:54.234576 master-1 kubenswrapper[4740]: E1014 13:15:54.233746 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6927f794-9b47-4a35-b412-78b7d24f7622" containerName="installer" Oct 14 13:15:54.234576 master-1 kubenswrapper[4740]: I1014 13:15:54.233768 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6927f794-9b47-4a35-b412-78b7d24f7622" containerName="installer" Oct 14 13:15:54.234576 master-1 kubenswrapper[4740]: I1014 13:15:54.233961 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6927f794-9b47-4a35-b412-78b7d24f7622" containerName="installer" Oct 14 13:15:54.234811 master-1 kubenswrapper[4740]: I1014 13:15:54.234743 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.239373 master-1 kubenswrapper[4740]: I1014 13:15:54.239323 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-p7d8w" Oct 14 13:15:54.244073 master-1 kubenswrapper[4740]: I1014 13:15:54.243983 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-1"] Oct 14 13:15:54.429667 master-1 kubenswrapper[4740]: I1014 13:15:54.429559 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c165323a-2806-46b2-b073-0dc58b978bc1-kube-api-access\") pod \"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.429667 master-1 kubenswrapper[4740]: I1014 13:15:54.429639 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-var-lock\") pod \"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.430058 master-1 kubenswrapper[4740]: I1014 13:15:54.429711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.530799 master-1 kubenswrapper[4740]: I1014 13:15:54.530729 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c165323a-2806-46b2-b073-0dc58b978bc1-kube-api-access\") pod 
\"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.530799 master-1 kubenswrapper[4740]: I1014 13:15:54.530801 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-var-lock\") pod \"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.531219 master-1 kubenswrapper[4740]: I1014 13:15:54.530877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.531219 master-1 kubenswrapper[4740]: I1014 13:15:54.531025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-kubelet-dir\") pod \"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.531219 master-1 kubenswrapper[4740]: I1014 13:15:54.531020 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-var-lock\") pod \"installer-2-master-1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.565012 master-1 kubenswrapper[4740]: I1014 13:15:54.564924 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c165323a-2806-46b2-b073-0dc58b978bc1-kube-api-access\") pod \"installer-2-master-1\" (UID: 
\"c165323a-2806-46b2-b073-0dc58b978bc1\") " pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.593339 master-1 kubenswrapper[4740]: I1014 13:15:54.593263 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:15:54.770979 master-1 kubenswrapper[4740]: I1014 13:15:54.770925 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:54.770979 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:54.770979 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:54.770979 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:54.772003 master-1 kubenswrapper[4740]: I1014 13:15:54.770991 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:55.080203 master-1 kubenswrapper[4740]: I1014 13:15:55.080096 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-1"] Oct 14 13:15:55.091467 master-1 kubenswrapper[4740]: W1014 13:15:55.091342 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc165323a_2806_46b2_b073_0dc58b978bc1.slice/crio-f6ce3c40126f852f098680c71a875fbc39568c856c76f5f5fdf498fc0afa8d3e WatchSource:0}: Error finding container f6ce3c40126f852f098680c71a875fbc39568c856c76f5f5fdf498fc0afa8d3e: Status 404 returned error can't find the container with id f6ce3c40126f852f098680c71a875fbc39568c856c76f5f5fdf498fc0afa8d3e Oct 14 13:15:55.599085 master-1 kubenswrapper[4740]: I1014 13:15:55.598965 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c165323a-2806-46b2-b073-0dc58b978bc1","Type":"ContainerStarted","Data":"fc6ed49f7d7681e175f0dbe0ce31e2f5ed9664eb3558efb48080580f7bec09c1"} Oct 14 13:15:55.599085 master-1 kubenswrapper[4740]: I1014 13:15:55.599032 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c165323a-2806-46b2-b073-0dc58b978bc1","Type":"ContainerStarted","Data":"f6ce3c40126f852f098680c71a875fbc39568c856c76f5f5fdf498fc0afa8d3e"} Oct 14 13:15:55.631298 master-1 kubenswrapper[4740]: I1014 13:15:55.631016 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-1" podStartSLOduration=1.630985277 podStartE2EDuration="1.630985277s" podCreationTimestamp="2025-10-14 13:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:15:55.630210609 +0000 UTC m=+581.440499968" watchObservedRunningTime="2025-10-14 13:15:55.630985277 +0000 UTC m=+581.441274646" Oct 14 13:15:55.771222 master-1 kubenswrapper[4740]: I1014 13:15:55.771118 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:55.771222 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:55.771222 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:55.771222 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:55.771222 master-1 kubenswrapper[4740]: I1014 13:15:55.771205 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Oct 14 13:15:56.771173 master-1 kubenswrapper[4740]: I1014 13:15:56.771077 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:56.771173 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:56.771173 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:56.771173 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:56.771173 master-1 kubenswrapper[4740]: I1014 13:15:56.771159 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:57.771337 master-1 kubenswrapper[4740]: I1014 13:15:57.771258 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:57.771337 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:57.771337 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:57.771337 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:57.771980 master-1 kubenswrapper[4740]: I1014 13:15:57.771367 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:58.771602 master-1 kubenswrapper[4740]: I1014 13:15:58.771531 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:58.771602 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:58.771602 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:58.771602 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:58.772128 master-1 kubenswrapper[4740]: I1014 13:15:58.771625 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:15:59.771913 master-1 kubenswrapper[4740]: I1014 13:15:59.771792 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:15:59.771913 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:15:59.771913 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:15:59.771913 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:15:59.772920 master-1 kubenswrapper[4740]: I1014 13:15:59.771901 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:00.771522 master-1 kubenswrapper[4740]: I1014 13:16:00.771409 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:00.771522 master-1 kubenswrapper[4740]: 
[-]has-synced failed: reason withheld Oct 14 13:16:00.771522 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:00.771522 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:00.771522 master-1 kubenswrapper[4740]: I1014 13:16:00.771490 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:01.770864 master-1 kubenswrapper[4740]: I1014 13:16:01.770716 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:01.770864 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:01.770864 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:01.770864 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:01.770864 master-1 kubenswrapper[4740]: I1014 13:16:01.770859 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:02.771532 master-1 kubenswrapper[4740]: I1014 13:16:02.771425 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:02.771532 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:02.771532 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:02.771532 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:02.771532 master-1 
kubenswrapper[4740]: I1014 13:16:02.771505 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:03.773784 master-1 kubenswrapper[4740]: I1014 13:16:03.773676 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:03.773784 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:03.773784 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:03.773784 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:03.773784 master-1 kubenswrapper[4740]: I1014 13:16:03.773763 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:04.771946 master-1 kubenswrapper[4740]: I1014 13:16:04.771824 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:04.771946 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:04.771946 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:04.771946 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:04.772447 master-1 kubenswrapper[4740]: I1014 13:16:04.771956 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:05.770979 master-1 kubenswrapper[4740]: I1014 13:16:05.770862 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:05.770979 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:05.770979 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:05.770979 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:05.770979 master-1 kubenswrapper[4740]: I1014 13:16:05.770948 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:06.771995 master-1 kubenswrapper[4740]: I1014 13:16:06.771884 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:06.771995 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:06.771995 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:06.771995 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:06.771995 master-1 kubenswrapper[4740]: I1014 13:16:06.771980 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:07.771187 master-1 kubenswrapper[4740]: I1014 13:16:07.771069 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:07.771187 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:07.771187 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:07.771187 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:07.771187 master-1 kubenswrapper[4740]: I1014 13:16:07.771162 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:08.771420 master-1 kubenswrapper[4740]: I1014 13:16:08.771329 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:08.771420 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:08.771420 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:08.771420 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:08.772408 master-1 kubenswrapper[4740]: I1014 13:16:08.771440 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:09.771714 master-1 kubenswrapper[4740]: I1014 13:16:09.771597 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:09.771714 master-1 
kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:09.771714 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:09.771714 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:09.771714 master-1 kubenswrapper[4740]: I1014 13:16:09.771689 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:10.110290 master-1 kubenswrapper[4740]: I1014 13:16:10.110088 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:16:10.771808 master-1 kubenswrapper[4740]: I1014 13:16:10.771676 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:10.771808 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:10.771808 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:10.771808 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:10.771808 master-1 kubenswrapper[4740]: I1014 13:16:10.771787 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:11.772218 master-1 kubenswrapper[4740]: I1014 13:16:11.772058 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Oct 14 13:16:11.772218 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:11.772218 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:11.772218 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:11.772218 master-1 kubenswrapper[4740]: I1014 13:16:11.772170 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:12.772033 master-1 kubenswrapper[4740]: I1014 13:16:12.771846 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:12.772033 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:12.772033 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:12.772033 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:12.772033 master-1 kubenswrapper[4740]: I1014 13:16:12.771939 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:13.770932 master-1 kubenswrapper[4740]: I1014 13:16:13.770842 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:13.770932 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:13.770932 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:13.770932 master-1 
kubenswrapper[4740]: healthz check failed Oct 14 13:16:13.770932 master-1 kubenswrapper[4740]: I1014 13:16:13.770925 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:14.770215 master-1 kubenswrapper[4740]: I1014 13:16:14.770150 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:14.770215 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:14.770215 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:14.770215 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:14.771187 master-1 kubenswrapper[4740]: I1014 13:16:14.770282 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:15.770980 master-1 kubenswrapper[4740]: I1014 13:16:15.770877 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:15.770980 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:15.770980 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:15.770980 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:15.770980 master-1 kubenswrapper[4740]: I1014 13:16:15.770966 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:16.770734 master-1 kubenswrapper[4740]: I1014 13:16:16.770636 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:16.770734 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:16.770734 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:16.770734 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:16.770734 master-1 kubenswrapper[4740]: I1014 13:16:16.770723 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:17.127766 master-1 kubenswrapper[4740]: I1014 13:16:17.127627 4740 scope.go:117] "RemoveContainer" containerID="9413841217e365c44535d9cbb2430590ab6343e3232163787d636ec31207723f" Oct 14 13:16:17.168759 master-1 kubenswrapper[4740]: I1014 13:16:17.168673 4740 scope.go:117] "RemoveContainer" containerID="f385d8dcaa94ab3187b83b710fe57b0f187750d657672640e6af7430e879bf5e" Oct 14 13:16:17.197071 master-1 kubenswrapper[4740]: I1014 13:16:17.196993 4740 scope.go:117] "RemoveContainer" containerID="d3ed9cbb6f5f77f97002c046a3a9e3e350cee658f8b7fea03e390b2ecfd3b928" Oct 14 13:16:17.217647 master-1 kubenswrapper[4740]: I1014 13:16:17.217552 4740 scope.go:117] "RemoveContainer" containerID="8092a9e6ffee3c6072e897161e78ff3767262aeb08c415263028b74755398c8c" Oct 14 13:16:17.771785 master-1 kubenswrapper[4740]: I1014 13:16:17.771647 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:17.771785 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:17.771785 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:17.771785 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:17.771785 master-1 kubenswrapper[4740]: I1014 13:16:17.771775 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:18.771440 master-1 kubenswrapper[4740]: I1014 13:16:18.771335 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:18.771440 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:18.771440 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:18.771440 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:18.772551 master-1 kubenswrapper[4740]: I1014 13:16:18.771456 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:19.771686 master-1 kubenswrapper[4740]: I1014 13:16:19.771623 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:19.771686 master-1 
kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:19.771686 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:19.771686 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:19.772755 master-1 kubenswrapper[4740]: I1014 13:16:19.771689 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:20.771036 master-1 kubenswrapper[4740]: I1014 13:16:20.770961 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:20.771036 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:20.771036 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:20.771036 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:20.771895 master-1 kubenswrapper[4740]: I1014 13:16:20.771057 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:21.665597 master-1 kubenswrapper[4740]: E1014 13:16:21.665484 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found Oct 14 13:16:21.666634 master-1 kubenswrapper[4740]: E1014 13:16:21.665679 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:17:25.665635755 +0000 UTC m=+671.475925254 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found Oct 14 13:16:21.771719 master-1 kubenswrapper[4740]: I1014 13:16:21.771627 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:21.771719 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:21.771719 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:21.771719 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:21.772046 master-1 kubenswrapper[4740]: I1014 13:16:21.771737 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:22.770771 master-1 kubenswrapper[4740]: I1014 13:16:22.770681 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:22.770771 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:22.770771 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:22.770771 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:22.770771 master-1 kubenswrapper[4740]: I1014 13:16:22.770745 4740 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:23.771734 master-1 kubenswrapper[4740]: I1014 13:16:23.771621 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:23.771734 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:23.771734 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:23.771734 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:23.771734 master-1 kubenswrapper[4740]: I1014 13:16:23.771720 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:24.771577 master-1 kubenswrapper[4740]: I1014 13:16:24.771441 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:24.771577 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:24.771577 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:24.771577 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:24.772623 master-1 kubenswrapper[4740]: I1014 13:16:24.771572 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:25.771047 
master-1 kubenswrapper[4740]: I1014 13:16:25.770929 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:25.771047 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:25.771047 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:25.771047 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:25.771047 master-1 kubenswrapper[4740]: I1014 13:16:25.771039 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:26.771805 master-1 kubenswrapper[4740]: I1014 13:16:26.771706 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:26.771805 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:26.771805 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:26.771805 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:26.771805 master-1 kubenswrapper[4740]: I1014 13:16:26.771794 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:27.771486 master-1 kubenswrapper[4740]: I1014 13:16:27.771359 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:27.771486 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:27.771486 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:27.771486 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:27.773317 master-1 kubenswrapper[4740]: I1014 13:16:27.771507 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:28.771745 master-1 kubenswrapper[4740]: I1014 13:16:28.771547 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:28.771745 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:28.771745 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:28.771745 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:28.771745 master-1 kubenswrapper[4740]: I1014 13:16:28.771698 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:29.770559 master-1 kubenswrapper[4740]: I1014 13:16:29.770461 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:29.770559 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:29.770559 master-1 
kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:29.770559 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:29.771324 master-1 kubenswrapper[4740]: I1014 13:16:29.770564 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:30.771755 master-1 kubenswrapper[4740]: I1014 13:16:30.771688 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:30.771755 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:30.771755 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:30.771755 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:30.772550 master-1 kubenswrapper[4740]: I1014 13:16:30.771774 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:31.771415 master-1 kubenswrapper[4740]: I1014 13:16:31.771301 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:31.771415 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:31.771415 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:31.771415 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:31.772449 master-1 kubenswrapper[4740]: I1014 13:16:31.771423 4740 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:32.779156 master-1 kubenswrapper[4740]: I1014 13:16:32.779078 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:32.779156 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:32.779156 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:32.779156 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:32.780144 master-1 kubenswrapper[4740]: I1014 13:16:32.779172 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:33.654883 master-1 kubenswrapper[4740]: I1014 13:16:33.654779 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:16:33.655425 master-1 kubenswrapper[4740]: I1014 13:16:33.655338 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver" containerID="cri-o://af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00" gracePeriod=135 Oct 14 13:16:33.655501 master-1 kubenswrapper[4740]: I1014 13:16:33.655402 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641" gracePeriod=135 Oct 14 13:16:33.655552 master-1 kubenswrapper[4740]: I1014 13:16:33.655513 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer" containerID="cri-o://eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64" gracePeriod=135 Oct 14 13:16:33.655599 master-1 kubenswrapper[4740]: I1014 13:16:33.655422 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac" gracePeriod=135 Oct 14 13:16:33.655822 master-1 kubenswrapper[4740]: I1014 13:16:33.655749 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41" gracePeriod=135 Oct 14 13:16:33.658541 master-1 kubenswrapper[4740]: I1014 13:16:33.658494 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:16:33.660120 master-1 kubenswrapper[4740]: E1014 13:16:33.660084 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz" Oct 14 13:16:33.660367 master-1 kubenswrapper[4740]: I1014 13:16:33.660341 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz" Oct 14 13:16:33.660563 master-1 kubenswrapper[4740]: E1014 
13:16:33.660533 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="setup" Oct 14 13:16:33.660766 master-1 kubenswrapper[4740]: I1014 13:16:33.660743 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="setup" Oct 14 13:16:33.660909 master-1 kubenswrapper[4740]: E1014 13:16:33.660887 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints" Oct 14 13:16:33.661045 master-1 kubenswrapper[4740]: I1014 13:16:33.661025 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints" Oct 14 13:16:33.661179 master-1 kubenswrapper[4740]: E1014 13:16:33.661159 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer" Oct 14 13:16:33.661387 master-1 kubenswrapper[4740]: I1014 13:16:33.661361 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer" Oct 14 13:16:33.661547 master-1 kubenswrapper[4740]: E1014 13:16:33.661526 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver" Oct 14 13:16:33.661718 master-1 kubenswrapper[4740]: I1014 13:16:33.661691 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver" Oct 14 13:16:33.661907 master-1 kubenswrapper[4740]: E1014 13:16:33.661876 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:16:33.662078 master-1 kubenswrapper[4740]: I1014 13:16:33.662054 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:16:33.662638 master-1 kubenswrapper[4740]: I1014 13:16:33.662539 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver" Oct 14 13:16:33.662823 master-1 kubenswrapper[4740]: I1014 13:16:33.662800 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-insecure-readyz" Oct 14 13:16:33.663031 master-1 kubenswrapper[4740]: I1014 13:16:33.663009 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:16:33.663172 master-1 kubenswrapper[4740]: I1014 13:16:33.663151 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-cert-syncer" Oct 14 13:16:33.663363 master-1 kubenswrapper[4740]: I1014 13:16:33.663335 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b1362996d1e0c2cea0bee73eb18468" containerName="kube-apiserver-check-endpoints" Oct 14 13:16:33.773789 master-1 kubenswrapper[4740]: I1014 13:16:33.773656 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.774082 master-1 kubenswrapper[4740]: I1014 13:16:33.773857 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.774383 master-1 kubenswrapper[4740]: I1014 13:16:33.774191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.779168 master-1 kubenswrapper[4740]: I1014 13:16:33.779039 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:33.779168 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:33.779168 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:33.779168 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:33.780012 master-1 kubenswrapper[4740]: I1014 13:16:33.779176 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:33.876030 master-1 kubenswrapper[4740]: I1014 13:16:33.875962 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.876209 master-1 kubenswrapper[4740]: I1014 13:16:33.876111 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod 
\"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.876209 master-1 kubenswrapper[4740]: I1014 13:16:33.876138 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.876426 master-1 kubenswrapper[4740]: I1014 13:16:33.876274 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.876426 master-1 kubenswrapper[4740]: I1014 13:16:33.876330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.876426 master-1 kubenswrapper[4740]: I1014 13:16:33.876256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:16:33.897083 master-1 kubenswrapper[4740]: I1014 13:16:33.897006 4740 generic.go:334] "Generic (PLEG): container finished" podID="c165323a-2806-46b2-b073-0dc58b978bc1" containerID="fc6ed49f7d7681e175f0dbe0ce31e2f5ed9664eb3558efb48080580f7bec09c1" exitCode=0 Oct 14 13:16:33.897345 master-1 kubenswrapper[4740]: I1014 
13:16:33.897142 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c165323a-2806-46b2-b073-0dc58b978bc1","Type":"ContainerDied","Data":"fc6ed49f7d7681e175f0dbe0ce31e2f5ed9664eb3558efb48080580f7bec09c1"} Oct 14 13:16:33.902846 master-1 kubenswrapper[4740]: I1014 13:16:33.902765 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_34b1362996d1e0c2cea0bee73eb18468/kube-apiserver-cert-syncer/0.log" Oct 14 13:16:33.903740 master-1 kubenswrapper[4740]: I1014 13:16:33.903674 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="34b1362996d1e0c2cea0bee73eb18468" podUID="e39186c2ebd02622803bdbec6984de2a" Oct 14 13:16:33.904420 master-1 kubenswrapper[4740]: I1014 13:16:33.904352 4740 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641" exitCode=0 Oct 14 13:16:33.904420 master-1 kubenswrapper[4740]: I1014 13:16:33.904387 4740 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41" exitCode=0 Oct 14 13:16:33.904420 master-1 kubenswrapper[4740]: I1014 13:16:33.904402 4740 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac" exitCode=0 Oct 14 13:16:33.904420 master-1 kubenswrapper[4740]: I1014 13:16:33.904411 4740 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64" exitCode=2 Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: I1014 13:16:33.963371 4740 patch_prober.go:28] 
interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: 
[+]poststarthook/start-system-namespaces-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: 
[+]autoregister-completion ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:16:33.963548 master-1 kubenswrapper[4740]: I1014 13:16:33.963493 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:34.771814 master-1 kubenswrapper[4740]: I1014 13:16:34.771708 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:34.771814 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:34.771814 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:34.771814 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:34.772411 master-1 kubenswrapper[4740]: I1014 13:16:34.771814 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:35.290125 master-1 kubenswrapper[4740]: I1014 13:16:35.289978 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:16:35.402790 master-1 kubenswrapper[4740]: I1014 13:16:35.402698 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c165323a-2806-46b2-b073-0dc58b978bc1-kube-api-access\") pod \"c165323a-2806-46b2-b073-0dc58b978bc1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " Oct 14 13:16:35.403060 master-1 kubenswrapper[4740]: I1014 13:16:35.402869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-var-lock\") pod \"c165323a-2806-46b2-b073-0dc58b978bc1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " Oct 14 13:16:35.403060 master-1 kubenswrapper[4740]: I1014 13:16:35.402963 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-kubelet-dir\") pod \"c165323a-2806-46b2-b073-0dc58b978bc1\" (UID: \"c165323a-2806-46b2-b073-0dc58b978bc1\") " Oct 14 13:16:35.403519 master-1 kubenswrapper[4740]: I1014 13:16:35.403464 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c165323a-2806-46b2-b073-0dc58b978bc1" (UID: "c165323a-2806-46b2-b073-0dc58b978bc1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:16:35.403613 master-1 kubenswrapper[4740]: I1014 13:16:35.403496 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-var-lock" (OuterVolumeSpecName: "var-lock") pod "c165323a-2806-46b2-b073-0dc58b978bc1" (UID: "c165323a-2806-46b2-b073-0dc58b978bc1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:16:35.406439 master-1 kubenswrapper[4740]: I1014 13:16:35.406389 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c165323a-2806-46b2-b073-0dc58b978bc1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c165323a-2806-46b2-b073-0dc58b978bc1" (UID: "c165323a-2806-46b2-b073-0dc58b978bc1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:16:35.504554 master-1 kubenswrapper[4740]: I1014 13:16:35.504446 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:16:35.504554 master-1 kubenswrapper[4740]: I1014 13:16:35.504497 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c165323a-2806-46b2-b073-0dc58b978bc1-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:16:35.504554 master-1 kubenswrapper[4740]: I1014 13:16:35.504512 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c165323a-2806-46b2-b073-0dc58b978bc1-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:16:35.771047 master-1 kubenswrapper[4740]: I1014 13:16:35.770970 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:35.771047 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:35.771047 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:35.771047 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:35.771495 master-1 kubenswrapper[4740]: I1014 13:16:35.771068 4740 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:35.919359 master-1 kubenswrapper[4740]: I1014 13:16:35.919264 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-1" event={"ID":"c165323a-2806-46b2-b073-0dc58b978bc1","Type":"ContainerDied","Data":"f6ce3c40126f852f098680c71a875fbc39568c856c76f5f5fdf498fc0afa8d3e"} Oct 14 13:16:35.919359 master-1 kubenswrapper[4740]: I1014 13:16:35.919339 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ce3c40126f852f098680c71a875fbc39568c856c76f5f5fdf498fc0afa8d3e" Oct 14 13:16:35.919631 master-1 kubenswrapper[4740]: I1014 13:16:35.919429 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-1" Oct 14 13:16:36.771345 master-1 kubenswrapper[4740]: I1014 13:16:36.771276 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:36.771345 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:36.771345 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:36.771345 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:36.772355 master-1 kubenswrapper[4740]: I1014 13:16:36.771367 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:37.771058 master-1 kubenswrapper[4740]: I1014 13:16:37.770970 4740 
patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:37.771058 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:37.771058 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:37.771058 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:37.772095 master-1 kubenswrapper[4740]: I1014 13:16:37.771076 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:38.770663 master-1 kubenswrapper[4740]: I1014 13:16:38.770544 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:38.770663 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:38.770663 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:38.770663 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:38.770663 master-1 kubenswrapper[4740]: I1014 13:16:38.770657 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: I1014 13:16:38.964162 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok 
Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:16:38.964254 master-1 
kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:16:38.964254 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:16:38.966565 master-1 kubenswrapper[4740]: I1014 13:16:38.964280 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:39.770935 master-1 kubenswrapper[4740]: I1014 13:16:39.770822 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:39.770935 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:39.770935 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:39.770935 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:39.771425 master-1 kubenswrapper[4740]: I1014 13:16:39.770965 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:40.104081 master-1 kubenswrapper[4740]: I1014 13:16:40.103895 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log"
Oct 14 13:16:40.771063 master-1 kubenswrapper[4740]: I1014 13:16:40.770945 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:40.771063 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:40.771063 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:40.771063 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:40.771575 master-1 kubenswrapper[4740]: I1014 13:16:40.771079 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:41.771724 master-1 kubenswrapper[4740]: I1014 13:16:41.771603 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:41.771724 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:41.771724 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:41.771724 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:41.772907 master-1 kubenswrapper[4740]: I1014 13:16:41.771726 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:42.770701 master-1 kubenswrapper[4740]: I1014 13:16:42.770599 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:42.770701 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:42.770701 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:42.770701 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:42.770701 master-1 kubenswrapper[4740]: I1014 13:16:42.770700 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:43.771319 master-1 kubenswrapper[4740]: I1014 13:16:43.771136 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:43.771319 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:43.771319 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:43.771319 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:43.771319 master-1 kubenswrapper[4740]: I1014 13:16:43.771266 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: I1014 13:16:43.961966 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:16:43.962056 master-1 kubenswrapper[4740]: I1014 13:16:43.962036 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:43.963702 master-1 kubenswrapper[4740]: I1014 13:16:43.962110 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1"
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: I1014 13:16:43.969576 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:16:43.969644 master-1 kubenswrapper[4740]: I1014 13:16:43.969638 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:44.771440 master-1 kubenswrapper[4740]: I1014 13:16:44.771328 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:44.771440 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:44.771440 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:44.771440 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:44.772778 master-1 kubenswrapper[4740]: I1014 13:16:44.771454 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:45.770919 master-1 kubenswrapper[4740]: I1014 13:16:45.770803 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:45.770919 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:45.770919 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:45.770919 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:45.771388 master-1 kubenswrapper[4740]: I1014 13:16:45.770917 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:46.050839 master-1 kubenswrapper[4740]: E1014 13:16:46.050605 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" podUID="cc579fa5-c1e0-40ed-b1f3-e953a42e74d6"
Oct 14 13:16:46.050839 master-1 kubenswrapper[4740]: E1014 13:16:46.050753 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" podUID="180ced15-1fb1-464d-85f2-0bcc0d836dab"
Oct 14 13:16:46.770708 master-1 kubenswrapper[4740]: I1014 13:16:46.770618 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:46.770708 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:46.770708 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:46.770708 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:46.771178 master-1 kubenswrapper[4740]: I1014 13:16:46.770735 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:46.995914 master-1 kubenswrapper[4740]: I1014 13:16:46.995825 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:16:46.996201 master-1 kubenswrapper[4740]: I1014 13:16:46.995961 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:16:47.497359 master-1 kubenswrapper[4740]: I1014 13:16:47.497214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:16:47.498451 master-1 kubenswrapper[4740]: E1014 13:16:47.497457 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:18:49.497437621 +0000 UTC m=+755.307726960 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:16:47.598274 master-1 kubenswrapper[4740]: I1014 13:16:47.598162 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:16:47.598805 master-1 kubenswrapper[4740]: E1014 13:16:47.598615 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:18:49.59843511 +0000 UTC m=+755.408724539 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:16:47.772425 master-1 kubenswrapper[4740]: I1014 13:16:47.772205 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:47.772425 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:47.772425 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:47.772425 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:47.772425 master-1 kubenswrapper[4740]: I1014 13:16:47.772332 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:48.770761 master-1 kubenswrapper[4740]: I1014 13:16:48.770659 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:48.770761 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:48.770761 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:48.770761 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:48.770761 master-1 kubenswrapper[4740]: I1014 13:16:48.770724 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: I1014 13:16:48.963580 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:16:48.963666 master-1 kubenswrapper[4740]: I1014 13:16:48.963659 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:49.771608 master-1 kubenswrapper[4740]: I1014 13:16:49.771525 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:49.771608 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:49.771608 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:49.771608 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:49.772594 master-1 kubenswrapper[4740]: I1014 13:16:49.771609 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:50.771433 master-1 kubenswrapper[4740]: I1014 13:16:50.771361 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:50.771433 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:50.771433 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:50.771433 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:50.772308 master-1 kubenswrapper[4740]: I1014 13:16:50.771455 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:51.145535 master-1 kubenswrapper[4740]: I1014 13:16:51.145298 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 13:16:51.771599 master-1 kubenswrapper[4740]: I1014 13:16:51.771513 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:51.771599 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:51.771599 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:51.771599 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:51.772651 master-1 kubenswrapper[4740]: I1014 13:16:51.771607 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:52.771477 master-1 kubenswrapper[4740]: I1014 13:16:52.771421 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:52.771477 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:52.771477 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:52.771477 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:52.771747 master-1 kubenswrapper[4740]: I1014 13:16:52.771491 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:53.773581 master-1 kubenswrapper[4740]: I1014 13:16:53.773482 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:53.773581 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:53.773581 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:53.773581 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:53.774579 master-1 kubenswrapper[4740]: I1014 13:16:53.773584 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: I1014 13:16:53.964184 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:16:53.964287 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:16:53.967203 master-1 kubenswrapper[4740]: I1014 13:16:53.964315 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:54.772331 master-1 kubenswrapper[4740]: I1014 13:16:54.772184 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:54.772331 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:54.772331 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:54.772331 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:54.772965 master-1 kubenswrapper[4740]: I1014 13:16:54.772396 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:55.771471 master-1 kubenswrapper[4740]: I1014 13:16:55.771373 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:55.771471 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:55.771471 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:55.771471 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:55.772523 master-1 kubenswrapper[4740]: I1014 13:16:55.771492 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:56.770161 master-1 kubenswrapper[4740]: I1014 13:16:56.770046 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:56.770161 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:56.770161 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:56.770161 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:56.770161 master-1 kubenswrapper[4740]: I1014 13:16:56.770151 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:16:57.770960 master-1 kubenswrapper[4740]: I1014 13:16:57.770855 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:16:57.770960 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld
Oct 14 13:16:57.770960 master-1 kubenswrapper[4740]: [+]process-running ok
Oct 14 13:16:57.770960 master-1 kubenswrapper[4740]: healthz check failed
Oct 14 13:16:57.770960 master-1 kubenswrapper[4740]: I1014 13:16:57.770954 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14
13:16:58.711417 master-1 kubenswrapper[4740]: I1014 13:16:58.711318 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-mzrkb_ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67/assisted-installer-controller/0.log" Oct 14 13:16:58.771264 master-1 kubenswrapper[4740]: I1014 13:16:58.771186 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:58.771264 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:58.771264 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:58.771264 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:58.772073 master-1 kubenswrapper[4740]: I1014 13:16:58.771290 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: I1014 13:16:58.962145 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok 
Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:16:58.962297 master-1 
kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:16:58.962297 master-1 kubenswrapper[4740]: I1014 13:16:58.962253 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:16:59.770696 master-1 
kubenswrapper[4740]: I1014 13:16:59.770549 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:16:59.770696 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:16:59.770696 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:16:59.770696 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:16:59.771126 master-1 kubenswrapper[4740]: I1014 13:16:59.770722 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:00.771241 master-1 kubenswrapper[4740]: I1014 13:17:00.771157 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:00.771241 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:17:00.771241 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:17:00.771241 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:17:00.771972 master-1 kubenswrapper[4740]: I1014 13:17:00.771277 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:01.771204 master-1 kubenswrapper[4740]: I1014 13:17:01.771081 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:01.771204 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:17:01.771204 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:17:01.771204 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:17:01.771204 master-1 kubenswrapper[4740]: I1014 13:17:01.771190 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:02.771616 master-1 kubenswrapper[4740]: I1014 13:17:02.771490 4740 patch_prober.go:28] interesting pod/router-default-5ddb89f76-xf924 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:02.771616 master-1 kubenswrapper[4740]: [-]has-synced failed: reason withheld Oct 14 13:17:02.771616 master-1 kubenswrapper[4740]: [+]process-running ok Oct 14 13:17:02.771616 master-1 kubenswrapper[4740]: healthz check failed Oct 14 13:17:02.771616 master-1 kubenswrapper[4740]: I1014 13:17:02.771608 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:02.772898 master-1 kubenswrapper[4740]: I1014 13:17:02.771673 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-xf924" Oct 14 13:17:02.772898 master-1 kubenswrapper[4740]: I1014 13:17:02.772359 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"574dcc96f027c302746e71fa1b6d9e59728f15441bda5dda38c7fb4f50571750"} 
pod="openshift-ingress/router-default-5ddb89f76-xf924" containerMessage="Container router failed startup probe, will be restarted" Oct 14 13:17:02.772898 master-1 kubenswrapper[4740]: I1014 13:17:02.772401 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5ddb89f76-xf924" podUID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerName="router" containerID="cri-o://574dcc96f027c302746e71fa1b6d9e59728f15441bda5dda38c7fb4f50571750" gracePeriod=3600 Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: I1014 13:17:03.960879 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: 
[+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:17:03.960983 master-1 kubenswrapper[4740]: I1014 13:17:03.960966 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: I1014 13:17:08.962172 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: 
[+]informer-sync ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok 
Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:17:08.962287 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:17:08.965783 master-1 kubenswrapper[4740]: I1014 13:17:08.962287 4740 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:09.843461 master-1 kubenswrapper[4740]: I1014 13:17:09.843377 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-1"] Oct 14 13:17:09.843755 master-1 kubenswrapper[4740]: E1014 13:17:09.843721 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c165323a-2806-46b2-b073-0dc58b978bc1" containerName="installer" Oct 14 13:17:09.843800 master-1 kubenswrapper[4740]: I1014 13:17:09.843755 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c165323a-2806-46b2-b073-0dc58b978bc1" containerName="installer" Oct 14 13:17:09.843997 master-1 kubenswrapper[4740]: I1014 13:17:09.843966 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c165323a-2806-46b2-b073-0dc58b978bc1" containerName="installer" Oct 14 13:17:09.844728 master-1 kubenswrapper[4740]: I1014 13:17:09.844691 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:09.847980 master-1 kubenswrapper[4740]: I1014 13:17:09.847932 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-bm6wx" Oct 14 13:17:09.854961 master-1 kubenswrapper[4740]: I1014 13:17:09.854865 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-1"] Oct 14 13:17:10.033320 master-1 kubenswrapper[4740]: I1014 13:17:10.033223 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.033901 master-1 kubenswrapper[4740]: I1014 13:17:10.033442 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-var-lock\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.033901 master-1 kubenswrapper[4740]: I1014 13:17:10.033634 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28eeaa8e-ec52-426b-a893-ccce40030c9b-kube-api-access\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.103688 master-1 kubenswrapper[4740]: I1014 13:17:10.103534 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-b6pv4_f4b808ea-786b-4ff6-a7e8-73b0c9ac8157/machine-config-server/0.log" Oct 14 13:17:10.135067 master-1 
kubenswrapper[4740]: I1014 13:17:10.134991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.135283 master-1 kubenswrapper[4740]: I1014 13:17:10.135172 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.135283 master-1 kubenswrapper[4740]: I1014 13:17:10.135199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-var-lock\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.135377 master-1 kubenswrapper[4740]: I1014 13:17:10.135293 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-var-lock\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.135377 master-1 kubenswrapper[4740]: I1014 13:17:10.135340 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28eeaa8e-ec52-426b-a893-ccce40030c9b-kube-api-access\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.158627 master-1 kubenswrapper[4740]: I1014 13:17:10.158549 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28eeaa8e-ec52-426b-a893-ccce40030c9b-kube-api-access\") pod \"installer-6-master-1\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.214786 master-1 kubenswrapper[4740]: I1014 13:17:10.214707 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:10.688331 master-1 kubenswrapper[4740]: I1014 13:17:10.688177 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-1"] Oct 14 13:17:10.700337 master-1 kubenswrapper[4740]: W1014 13:17:10.700173 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod28eeaa8e_ec52_426b_a893_ccce40030c9b.slice/crio-e98634e80f6f1702befbf8228b0dc7e62432b2f27d61cb0844328e77bdd89567 WatchSource:0}: Error finding container e98634e80f6f1702befbf8228b0dc7e62432b2f27d61cb0844328e77bdd89567: Status 404 returned error can't find the container with id e98634e80f6f1702befbf8228b0dc7e62432b2f27d61cb0844328e77bdd89567 Oct 14 13:17:11.160890 master-1 kubenswrapper[4740]: I1014 13:17:11.160805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"28eeaa8e-ec52-426b-a893-ccce40030c9b","Type":"ContainerStarted","Data":"e98634e80f6f1702befbf8228b0dc7e62432b2f27d61cb0844328e77bdd89567"} Oct 14 13:17:12.169613 master-1 kubenswrapper[4740]: I1014 13:17:12.169510 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"28eeaa8e-ec52-426b-a893-ccce40030c9b","Type":"ContainerStarted","Data":"6dea2941203d0f0a4e84ef0b965f18ee416a244f4609e917245605c985038897"} Oct 14 13:17:12.189953 master-1 kubenswrapper[4740]: I1014 13:17:12.189840 4740 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-1" podStartSLOduration=3.189814108 podStartE2EDuration="3.189814108s" podCreationTimestamp="2025-10-14 13:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.189024098 +0000 UTC m=+657.999313457" watchObservedRunningTime="2025-10-14 13:17:12.189814108 +0000 UTC m=+658.000103457"
Oct 14 13:17:13.450822 master-1 kubenswrapper[4740]: I1014 13:17:13.450689 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-1"]
Oct 14 13:17:13.453889 master-1 kubenswrapper[4740]: I1014 13:17:13.453810 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.460028 master-1 kubenswrapper[4740]: I1014 13:17:13.458686 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-sdwrm"
Oct 14 13:17:13.463666 master-1 kubenswrapper[4740]: I1014 13:17:13.463559 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-1"]
Oct 14 13:17:13.584276 master-1 kubenswrapper[4740]: I1014 13:17:13.584142 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.584276 master-1 kubenswrapper[4740]: I1014 13:17:13.584253 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-var-lock\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.584621 master-1 kubenswrapper[4740]: I1014 13:17:13.584291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/530f21ca-695c-4cd9-a086-08aff304d820-kube-api-access\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.685648 master-1 kubenswrapper[4740]: I1014 13:17:13.685591 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.685903 master-1 kubenswrapper[4740]: I1014 13:17:13.685671 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-var-lock\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.685903 master-1 kubenswrapper[4740]: I1014 13:17:13.685701 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/530f21ca-695c-4cd9-a086-08aff304d820-kube-api-access\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.685903 master-1 kubenswrapper[4740]: I1014 13:17:13.685766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.685903 master-1 kubenswrapper[4740]: I1014 13:17:13.685846 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-var-lock\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.718442 master-1 kubenswrapper[4740]: I1014 13:17:13.718287 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/530f21ca-695c-4cd9-a086-08aff304d820-kube-api-access\") pod \"installer-5-master-1\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") " pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.781719 master-1 kubenswrapper[4740]: I1014 13:17:13.781655 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: I1014 13:17:13.963060 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:17:13.963989 master-1 kubenswrapper[4740]: I1014 13:17:13.963122 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:14.288313 master-1 kubenswrapper[4740]: I1014 13:17:14.288257 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-1"]
Oct 14 13:17:14.289648 master-1 kubenswrapper[4740]: W1014 13:17:14.289604 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod530f21ca_695c_4cd9_a086_08aff304d820.slice/crio-1c983efe1d51c447ba7c794d47a2523a7b284105b8465ff1cd8fd405b8c7be08 WatchSource:0}: Error finding container 1c983efe1d51c447ba7c794d47a2523a7b284105b8465ff1cd8fd405b8c7be08: Status 404 returned error can't find the container with id 1c983efe1d51c447ba7c794d47a2523a7b284105b8465ff1cd8fd405b8c7be08
Oct 14 13:17:15.192275 master-1 kubenswrapper[4740]: I1014 13:17:15.192180 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"530f21ca-695c-4cd9-a086-08aff304d820","Type":"ContainerStarted","Data":"a2afbf475c2a8aa10639794ae9b15dc68c5bc36a3baba6a4fe552561f4a3d5fe"}
Oct 14 13:17:15.192275 master-1 kubenswrapper[4740]: I1014 13:17:15.192241 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"530f21ca-695c-4cd9-a086-08aff304d820","Type":"ContainerStarted","Data":"1c983efe1d51c447ba7c794d47a2523a7b284105b8465ff1cd8fd405b8c7be08"}
Oct 14 13:17:15.233418 master-1 kubenswrapper[4740]: I1014 13:17:15.233191 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-1" podStartSLOduration=2.233164027 podStartE2EDuration="2.233164027s" podCreationTimestamp="2025-10-14 13:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:15.221698104 +0000 UTC m=+661.031987473" watchObservedRunningTime="2025-10-14 13:17:15.233164027 +0000 UTC m=+661.043453406"
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: I1014 13:17:18.961216 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:17:18.961333 master-1 kubenswrapper[4740]: I1014 13:17:18.961320 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: I1014 13:17:23.963337 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:17:23.963422 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:17:23.965240 master-1 kubenswrapper[4740]: I1014 13:17:23.963470 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:25.766103 master-1 kubenswrapper[4740]: E1014 13:17:25.766000 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:17:25.766980 master-1 kubenswrapper[4740]: E1014 13:17:25.766163 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:19:27.766122771 +0000 UTC m=+793.576412180 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: I1014 13:17:28.963199 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:17:28.963301 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:17:28.965115 master-1 kubenswrapper[4740]: I1014 13:17:28.963316 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:31.325752 master-1 kubenswrapper[4740]: I1014 13:17:31.325332 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_307e6b842bfe51f420cddfc39289bc3c/kube-controller-manager/0.log"
Oct 14 13:17:31.325752 master-1 kubenswrapper[4740]: I1014 13:17:31.325408 4740 generic.go:334] "Generic (PLEG): container finished" podID="307e6b842bfe51f420cddfc39289bc3c" containerID="3f0bc4dbe3b6e7ad165b03d3b977fbdd2911734cf101d9169ff05b295df5788b" exitCode=1
Oct 14 13:17:31.325752 master-1 kubenswrapper[4740]: I1014 13:17:31.325489 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"307e6b842bfe51f420cddfc39289bc3c","Type":"ContainerDied","Data":"3f0bc4dbe3b6e7ad165b03d3b977fbdd2911734cf101d9169ff05b295df5788b"}
Oct 14 13:17:31.327190 master-1 kubenswrapper[4740]: I1014 13:17:31.326378 4740 scope.go:117] "RemoveContainer" containerID="3f0bc4dbe3b6e7ad165b03d3b977fbdd2911734cf101d9169ff05b295df5788b"
Oct 14 13:17:31.872286 master-1 kubenswrapper[4740]: I1014 13:17:31.872168 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:31.872690 master-1 kubenswrapper[4740]: I1014 13:17:31.872329 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:31.872792 master-1 kubenswrapper[4740]: I1014 13:17:31.872751 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:32.338498 master-1 kubenswrapper[4740]: I1014 13:17:32.338403 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_307e6b842bfe51f420cddfc39289bc3c/kube-controller-manager/0.log"
Oct 14 13:17:32.339400 master-1 kubenswrapper[4740]: I1014 13:17:32.338515 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"307e6b842bfe51f420cddfc39289bc3c","Type":"ContainerStarted","Data":"6c49b12e94298058c3fe7e52d9debfe9322d63d2cbb98a0a9d0c95aba6f944b3"}
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: I1014 13:17:33.965092 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:17:33.965159 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:17:33.968025 master-1 kubenswrapper[4740]: I1014 13:17:33.965185 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: I1014 13:17:38.962474 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:17:38.962589 master-1 kubenswrapper[4740]: I1014 13:17:38.962566 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:41.872917 master-1 kubenswrapper[4740]: I1014 13:17:41.872855 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:41.872917 master-1 kubenswrapper[4740]: I1014 13:17:41.872913 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:41.880745 master-1 kubenswrapper[4740]: I1014 13:17:41.880693 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:42.401622 master-1 kubenswrapper[4740]: I1014 
13:17:42.401564 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:17:42.521713 master-1 kubenswrapper[4740]: I1014 13:17:42.521583 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 14 13:17:42.521998 master-1 kubenswrapper[4740]: I1014 13:17:42.521956 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" containerID="cri-o://7ed5379248b9c8e16850c8587a413da8fce2a5280c56803e5377b6801674d1a9" gracePeriod=30 Oct 14 13:17:42.522063 master-1 kubenswrapper[4740]: I1014 13:17:42.521998 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" containerID="cri-o://6fc564eebe0d572c7e176e3aca3156a0fc412212ac1fc3f10e1293f2dcc05d04" gracePeriod=30 Oct 14 13:17:42.522105 master-1 kubenswrapper[4740]: I1014 13:17:42.521974 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" containerID="cri-o://c237848c47768b8806a19f783f2d47f481ae5a551fb55ae77977077026c61294" gracePeriod=30 Oct 14 13:17:42.524166 master-1 kubenswrapper[4740]: I1014 13:17:42.524144 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"] Oct 14 13:17:42.524487 master-1 kubenswrapper[4740]: E1014 13:17:42.524469 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" Oct 14 
13:17:42.524577 master-1 kubenswrapper[4740]: I1014 13:17:42.524565 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" Oct 14 13:17:42.524693 master-1 kubenswrapper[4740]: E1014 13:17:42.524678 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" Oct 14 13:17:42.524771 master-1 kubenswrapper[4740]: I1014 13:17:42.524759 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" Oct 14 13:17:42.524847 master-1 kubenswrapper[4740]: E1014 13:17:42.524836 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" Oct 14 13:17:42.524937 master-1 kubenswrapper[4740]: I1014 13:17:42.524925 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" Oct 14 13:17:42.525017 master-1 kubenswrapper[4740]: E1014 13:17:42.525006 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="wait-for-host-port" Oct 14 13:17:42.525087 master-1 kubenswrapper[4740]: I1014 13:17:42.525076 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="wait-for-host-port" Oct 14 13:17:42.525541 master-1 kubenswrapper[4740]: E1014 13:17:42.525527 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61df698d34d049669621b2249bfe758" containerName="wait-for-host-port" Oct 14 13:17:42.525621 master-1 kubenswrapper[4740]: I1014 13:17:42.525610 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61df698d34d049669621b2249bfe758" containerName="wait-for-host-port" Oct 14 13:17:42.525816 master-1 kubenswrapper[4740]: I1014 13:17:42.525802 4740 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler" Oct 14 13:17:42.525913 master-1 kubenswrapper[4740]: I1014 13:17:42.525900 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-recovery-controller" Oct 14 13:17:42.525990 master-1 kubenswrapper[4740]: I1014 13:17:42.525979 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61df698d34d049669621b2249bfe758" containerName="kube-scheduler-cert-syncer" Oct 14 13:17:42.606930 master-1 kubenswrapper[4740]: I1014 13:17:42.606820 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:17:42.607376 master-1 kubenswrapper[4740]: I1014 13:17:42.607092 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:17:42.708596 master-1 kubenswrapper[4740]: I1014 13:17:42.708461 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:17:42.708768 master-1 kubenswrapper[4740]: I1014 13:17:42.708632 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:17:42.708850 master-1 kubenswrapper[4740]: I1014 13:17:42.708793 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-resource-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:17:42.708943 master-1 kubenswrapper[4740]: I1014 13:17:42.708883 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1ffd3b5548bcf48fce7bfb9a8c802165-cert-dir\") pod \"openshift-kube-scheduler-master-1\" (UID: \"1ffd3b5548bcf48fce7bfb9a8c802165\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:17:43.408683 master-1 kubenswrapper[4740]: I1014 13:17:43.408556 4740 generic.go:334] "Generic (PLEG): container finished" podID="28eeaa8e-ec52-426b-a893-ccce40030c9b" containerID="6dea2941203d0f0a4e84ef0b965f18ee416a244f4609e917245605c985038897" exitCode=0 Oct 14 13:17:43.409709 master-1 kubenswrapper[4740]: I1014 13:17:43.408858 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"28eeaa8e-ec52-426b-a893-ccce40030c9b","Type":"ContainerDied","Data":"6dea2941203d0f0a4e84ef0b965f18ee416a244f4609e917245605c985038897"} Oct 14 13:17:43.413691 master-1 kubenswrapper[4740]: I1014 13:17:43.413607 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler-cert-syncer/0.log" Oct 14 13:17:43.415594 master-1 kubenswrapper[4740]: I1014 
13:17:43.415506 4740 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="c237848c47768b8806a19f783f2d47f481ae5a551fb55ae77977077026c61294" exitCode=0 Oct 14 13:17:43.415594 master-1 kubenswrapper[4740]: I1014 13:17:43.415579 4740 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="6fc564eebe0d572c7e176e3aca3156a0fc412212ac1fc3f10e1293f2dcc05d04" exitCode=2 Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: I1014 13:17:43.960809 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 
13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:17:43.960864 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:17:43.963718 master-1 kubenswrapper[4740]: I1014 13:17:43.963669 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:44.762709 master-1 kubenswrapper[4740]: I1014 13:17:44.762644 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:44.842590 master-1 kubenswrapper[4740]: I1014 13:17:44.842520 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-kubelet-dir\") pod \"28eeaa8e-ec52-426b-a893-ccce40030c9b\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " Oct 14 13:17:44.842798 master-1 kubenswrapper[4740]: I1014 13:17:44.842606 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28eeaa8e-ec52-426b-a893-ccce40030c9b-kube-api-access\") pod \"28eeaa8e-ec52-426b-a893-ccce40030c9b\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " Oct 14 13:17:44.842798 master-1 kubenswrapper[4740]: I1014 13:17:44.842676 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-var-lock\") pod \"28eeaa8e-ec52-426b-a893-ccce40030c9b\" (UID: \"28eeaa8e-ec52-426b-a893-ccce40030c9b\") " Oct 14 13:17:44.843318 master-1 kubenswrapper[4740]: I1014 13:17:44.843281 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-var-lock" (OuterVolumeSpecName: "var-lock") pod "28eeaa8e-ec52-426b-a893-ccce40030c9b" (UID: "28eeaa8e-ec52-426b-a893-ccce40030c9b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:17:44.843370 master-1 kubenswrapper[4740]: I1014 13:17:44.843338 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "28eeaa8e-ec52-426b-a893-ccce40030c9b" (UID: "28eeaa8e-ec52-426b-a893-ccce40030c9b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:17:44.847700 master-1 kubenswrapper[4740]: I1014 13:17:44.847647 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28eeaa8e-ec52-426b-a893-ccce40030c9b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "28eeaa8e-ec52-426b-a893-ccce40030c9b" (UID: "28eeaa8e-ec52-426b-a893-ccce40030c9b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:17:44.944099 master-1 kubenswrapper[4740]: I1014 13:17:44.944035 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:17:44.944099 master-1 kubenswrapper[4740]: I1014 13:17:44.944077 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28eeaa8e-ec52-426b-a893-ccce40030c9b-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:17:44.944099 master-1 kubenswrapper[4740]: I1014 13:17:44.944089 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/28eeaa8e-ec52-426b-a893-ccce40030c9b-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:17:45.432377 master-1 kubenswrapper[4740]: I1014 13:17:45.432288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-1" event={"ID":"28eeaa8e-ec52-426b-a893-ccce40030c9b","Type":"ContainerDied","Data":"e98634e80f6f1702befbf8228b0dc7e62432b2f27d61cb0844328e77bdd89567"} Oct 14 13:17:45.432377 master-1 kubenswrapper[4740]: I1014 13:17:45.432355 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e98634e80f6f1702befbf8228b0dc7e62432b2f27d61cb0844328e77bdd89567" Oct 14 13:17:45.432377 master-1 kubenswrapper[4740]: I1014 13:17:45.432361 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-1" Oct 14 13:17:47.441490 master-1 kubenswrapper[4740]: I1014 13:17:47.441373 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:17:47.441490 master-1 kubenswrapper[4740]: I1014 13:17:47.441479 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:17:47.475026 master-1 kubenswrapper[4740]: I1014 13:17:47.474930 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:17:47.475483 master-1 kubenswrapper[4740]: I1014 13:17:47.475407 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="cluster-policy-controller" containerID="cri-o://8f7f6048dbdc1a310a3e5e5e10294d23b83452d6cb4d457ef27b2ca284c65673" gracePeriod=30 Oct 14 13:17:47.475644 master-1 kubenswrapper[4740]: I1014 13:17:47.475539 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://e25a090fbeaf10ae15d12c1a5a4fc4c7f9e4949adb35ef26373fca7108a10da2" gracePeriod=30 Oct 14 13:17:47.475753 master-1 kubenswrapper[4740]: I1014 
13:17:47.475505 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://1454c7db3bd11bf75bea8fa684ae07789621749144f0ddb7b02fe3b66731d7cd" gracePeriod=30 Oct 14 13:17:47.475856 master-1 kubenswrapper[4740]: I1014 13:17:47.475557 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager" containerID="cri-o://6c49b12e94298058c3fe7e52d9debfe9322d63d2cbb98a0a9d0c95aba6f944b3" gracePeriod=30 Oct 14 13:17:47.477170 master-1 kubenswrapper[4740]: I1014 13:17:47.476929 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:17:47.477698 master-1 kubenswrapper[4740]: E1014 13:17:47.477645 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-cert-syncer" Oct 14 13:17:47.477698 master-1 kubenswrapper[4740]: I1014 13:17:47.477679 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-cert-syncer" Oct 14 13:17:47.477850 master-1 kubenswrapper[4740]: E1014 13:17:47.477742 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28eeaa8e-ec52-426b-a893-ccce40030c9b" containerName="installer" Oct 14 13:17:47.477850 master-1 kubenswrapper[4740]: I1014 13:17:47.477760 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="28eeaa8e-ec52-426b-a893-ccce40030c9b" containerName="installer" Oct 14 13:17:47.477850 master-1 kubenswrapper[4740]: E1014 13:17:47.477776 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager" Oct 14 13:17:47.477850 master-1 kubenswrapper[4740]: I1014 13:17:47.477823 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager" Oct 14 13:17:47.477850 master-1 kubenswrapper[4740]: E1014 13:17:47.477842 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-recovery-controller" Oct 14 13:17:47.477850 master-1 kubenswrapper[4740]: I1014 13:17:47.477855 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-recovery-controller" Oct 14 13:17:47.478203 master-1 kubenswrapper[4740]: E1014 13:17:47.477872 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="cluster-policy-controller" Oct 14 13:17:47.478203 master-1 kubenswrapper[4740]: I1014 13:17:47.477919 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="cluster-policy-controller" Oct 14 13:17:47.478203 master-1 kubenswrapper[4740]: E1014 13:17:47.477942 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager" Oct 14 13:17:47.478203 master-1 kubenswrapper[4740]: I1014 13:17:47.477953 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager" Oct 14 13:17:47.478635 master-1 kubenswrapper[4740]: I1014 13:17:47.478548 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-recovery-controller" Oct 14 13:17:47.478635 master-1 kubenswrapper[4740]: I1014 13:17:47.478623 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="cluster-policy-controller" Oct 14 13:17:47.478784 master-1 kubenswrapper[4740]: I1014 13:17:47.478645 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager" Oct 14 13:17:47.478784 master-1 kubenswrapper[4740]: I1014 13:17:47.478697 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="28eeaa8e-ec52-426b-a893-ccce40030c9b" containerName="installer" Oct 14 13:17:47.478784 master-1 kubenswrapper[4740]: I1014 13:17:47.478719 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager-cert-syncer" Oct 14 13:17:47.479337 master-1 kubenswrapper[4740]: I1014 13:17:47.479275 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="307e6b842bfe51f420cddfc39289bc3c" containerName="kube-controller-manager" Oct 14 13:17:47.587347 master-1 kubenswrapper[4740]: I1014 13:17:47.587225 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:17:47.587636 master-1 kubenswrapper[4740]: I1014 13:17:47.587597 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:17:47.689202 master-1 kubenswrapper[4740]: I1014 13:17:47.689106 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:47.689427 master-1 kubenswrapper[4740]: I1014 13:17:47.689309 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:47.689427 master-1 kubenswrapper[4740]: I1014 13:17:47.689322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:47.689427 master-1 kubenswrapper[4740]: I1014 13:17:47.689382 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:47.783891 master-1 kubenswrapper[4740]: I1014 13:17:47.783773 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_307e6b842bfe51f420cddfc39289bc3c/kube-controller-manager-cert-syncer/0.log"
Oct 14 13:17:47.785557 master-1 kubenswrapper[4740]: I1014 13:17:47.785506 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_307e6b842bfe51f420cddfc39289bc3c/kube-controller-manager/0.log"
Oct 14 13:17:47.785710 master-1 kubenswrapper[4740]: I1014 13:17:47.785663 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:47.892052 master-1 kubenswrapper[4740]: I1014 13:17:47.891955 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-resource-dir\") pod \"307e6b842bfe51f420cddfc39289bc3c\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") "
Oct 14 13:17:47.892052 master-1 kubenswrapper[4740]: I1014 13:17:47.892018 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-cert-dir\") pod \"307e6b842bfe51f420cddfc39289bc3c\" (UID: \"307e6b842bfe51f420cddfc39289bc3c\") "
Oct 14 13:17:47.892428 master-1 kubenswrapper[4740]: I1014 13:17:47.892159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "307e6b842bfe51f420cddfc39289bc3c" (UID: "307e6b842bfe51f420cddfc39289bc3c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:17:47.892428 master-1 kubenswrapper[4740]: I1014 13:17:47.892279 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "307e6b842bfe51f420cddfc39289bc3c" (UID: "307e6b842bfe51f420cddfc39289bc3c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:17:47.892860 master-1 kubenswrapper[4740]: I1014 13:17:47.892780 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-resource-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:47.892860 master-1 kubenswrapper[4740]: I1014 13:17:47.892855 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/307e6b842bfe51f420cddfc39289bc3c-cert-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:48.456651 master-1 kubenswrapper[4740]: I1014 13:17:48.456541 4740 generic.go:334] "Generic (PLEG): container finished" podID="530f21ca-695c-4cd9-a086-08aff304d820" containerID="a2afbf475c2a8aa10639794ae9b15dc68c5bc36a3baba6a4fe552561f4a3d5fe" exitCode=0
Oct 14 13:17:48.456651 master-1 kubenswrapper[4740]: I1014 13:17:48.456599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"530f21ca-695c-4cd9-a086-08aff304d820","Type":"ContainerDied","Data":"a2afbf475c2a8aa10639794ae9b15dc68c5bc36a3baba6a4fe552561f4a3d5fe"}
Oct 14 13:17:48.460378 master-1 kubenswrapper[4740]: I1014 13:17:48.460349 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_307e6b842bfe51f420cddfc39289bc3c/kube-controller-manager-cert-syncer/0.log"
Oct 14 13:17:48.461636 master-1 kubenswrapper[4740]: I1014 13:17:48.461615 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_307e6b842bfe51f420cddfc39289bc3c/kube-controller-manager/0.log"
Oct 14 13:17:48.461790 master-1 kubenswrapper[4740]: I1014 13:17:48.461761 4740 generic.go:334] "Generic (PLEG): container finished" podID="307e6b842bfe51f420cddfc39289bc3c" containerID="6c49b12e94298058c3fe7e52d9debfe9322d63d2cbb98a0a9d0c95aba6f944b3" exitCode=0
Oct 14 13:17:48.461949 master-1 kubenswrapper[4740]: I1014 13:17:48.461925 4740 generic.go:334] "Generic (PLEG): container finished" podID="307e6b842bfe51f420cddfc39289bc3c" containerID="1454c7db3bd11bf75bea8fa684ae07789621749144f0ddb7b02fe3b66731d7cd" exitCode=0
Oct 14 13:17:48.462066 master-1 kubenswrapper[4740]: I1014 13:17:48.462048 4740 generic.go:334] "Generic (PLEG): container finished" podID="307e6b842bfe51f420cddfc39289bc3c" containerID="e25a090fbeaf10ae15d12c1a5a4fc4c7f9e4949adb35ef26373fca7108a10da2" exitCode=2
Oct 14 13:17:48.462176 master-1 kubenswrapper[4740]: I1014 13:17:48.462156 4740 generic.go:334] "Generic (PLEG): container finished" podID="307e6b842bfe51f420cddfc39289bc3c" containerID="8f7f6048dbdc1a310a3e5e5e10294d23b83452d6cb4d457ef27b2ca284c65673" exitCode=0
Oct 14 13:17:48.462342 master-1 kubenswrapper[4740]: I1014 13:17:48.462322 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b67caa3ed969288705757561d3901f7a1269b03a91cc391c1fedbca5e3e2c36a"
Oct 14 13:17:48.462470 master-1 kubenswrapper[4740]: I1014 13:17:48.461931 4740 scope.go:117] "RemoveContainer" containerID="3f0bc4dbe3b6e7ad165b03d3b977fbdd2911734cf101d9169ff05b295df5788b"
Oct 14 13:17:48.462663 master-1 kubenswrapper[4740]: I1014 13:17:48.461930 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:17:48.954176 master-1 kubenswrapper[4740]: I1014 13:17:48.954094 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="307e6b842bfe51f420cddfc39289bc3c" path="/var/lib/kubelet/pods/307e6b842bfe51f420cddfc39289bc3c/volumes"
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: I1014 13:17:48.963469 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:17:48.963552 master-1 kubenswrapper[4740]: I1014 13:17:48.963553 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:49.473727 master-1 kubenswrapper[4740]: I1014 13:17:49.473649 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_307e6b842bfe51f420cddfc39289bc3c/kube-controller-manager-cert-syncer/0.log"
Oct 14 13:17:49.477614 master-1 kubenswrapper[4740]: I1014 13:17:49.477569 4740 generic.go:334] "Generic (PLEG): container finished" podID="b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28" containerID="574dcc96f027c302746e71fa1b6d9e59728f15441bda5dda38c7fb4f50571750" exitCode=0
Oct 14 13:17:49.477831 master-1 kubenswrapper[4740]: I1014 13:17:49.477679 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerDied","Data":"574dcc96f027c302746e71fa1b6d9e59728f15441bda5dda38c7fb4f50571750"}
Oct 14 13:17:49.477933 master-1 kubenswrapper[4740]: I1014 13:17:49.477849 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5ddb89f76-xf924" event={"ID":"b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28","Type":"ContainerStarted","Data":"1a5a34fbe571dbe7cd7971349bd65513310a20dd4d30a8336bf0775c38822b99"}
Oct 14 13:17:49.477933 master-1 kubenswrapper[4740]: I1014 13:17:49.477881 4740 scope.go:117] "RemoveContainer" containerID="f8c9d5de8cdc8e09521c2a264d3a5c111dd776eb29cce79eace0db63652de74f"
Oct 14 13:17:49.768960 master-1 kubenswrapper[4740]: I1014 13:17:49.768856 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5ddb89f76-xf924"
Oct 14 13:17:49.772734 master-1 kubenswrapper[4740]: I1014 13:17:49.772681 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5ddb89f76-xf924"
Oct 14 13:17:49.850362 master-1 kubenswrapper[4740]: I1014 13:17:49.850296 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:49.923182 master-1 kubenswrapper[4740]: I1014 13:17:49.923121 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/530f21ca-695c-4cd9-a086-08aff304d820-kube-api-access\") pod \"530f21ca-695c-4cd9-a086-08aff304d820\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") "
Oct 14 13:17:49.923660 master-1 kubenswrapper[4740]: I1014 13:17:49.923631 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-kubelet-dir\") pod \"530f21ca-695c-4cd9-a086-08aff304d820\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") "
Oct 14 13:17:49.923729 master-1 kubenswrapper[4740]: I1014 13:17:49.923676 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-var-lock\") pod \"530f21ca-695c-4cd9-a086-08aff304d820\" (UID: \"530f21ca-695c-4cd9-a086-08aff304d820\") "
Oct 14 13:17:49.923847 master-1 kubenswrapper[4740]: I1014 13:17:49.923796 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "530f21ca-695c-4cd9-a086-08aff304d820" (UID: "530f21ca-695c-4cd9-a086-08aff304d820"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:17:49.923965 master-1 kubenswrapper[4740]: I1014 13:17:49.923934 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-var-lock" (OuterVolumeSpecName: "var-lock") pod "530f21ca-695c-4cd9-a086-08aff304d820" (UID: "530f21ca-695c-4cd9-a086-08aff304d820"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:17:49.924026 master-1 kubenswrapper[4740]: I1014 13:17:49.924007 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:49.924026 master-1 kubenswrapper[4740]: I1014 13:17:49.924019 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/530f21ca-695c-4cd9-a086-08aff304d820-var-lock\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:49.926096 master-1 kubenswrapper[4740]: I1014 13:17:49.926038 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530f21ca-695c-4cd9-a086-08aff304d820-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "530f21ca-695c-4cd9-a086-08aff304d820" (UID: "530f21ca-695c-4cd9-a086-08aff304d820"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:17:50.025164 master-1 kubenswrapper[4740]: I1014 13:17:50.024945 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/530f21ca-695c-4cd9-a086-08aff304d820-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:50.485584 master-1 kubenswrapper[4740]: I1014 13:17:50.485532 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-1" event={"ID":"530f21ca-695c-4cd9-a086-08aff304d820","Type":"ContainerDied","Data":"1c983efe1d51c447ba7c794d47a2523a7b284105b8465ff1cd8fd405b8c7be08"}
Oct 14 13:17:50.485584 master-1 kubenswrapper[4740]: I1014 13:17:50.485580 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c983efe1d51c447ba7c794d47a2523a7b284105b8465ff1cd8fd405b8c7be08"
Oct 14 13:17:50.485584 master-1 kubenswrapper[4740]: I1014 13:17:50.485581 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-1"
Oct 14 13:17:50.488464 master-1 kubenswrapper[4740]: I1014 13:17:50.488413 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5ddb89f76-xf924"
Oct 14 13:17:50.490396 master-1 kubenswrapper[4740]: I1014 13:17:50.490361 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5ddb89f76-xf924"
Oct 14 13:17:51.706990 master-1 kubenswrapper[4740]: I1014 13:17:51.706906 4740 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body=
Oct 14 13:17:51.706990 master-1 kubenswrapper[4740]: I1014 13:17:51.706968 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="87a988d8-ed78-4396-a4fa-d856ff93860f" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused"
Oct 14 13:17:51.869877 master-1 kubenswrapper[4740]: I1014 13:17:51.869822 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_34b1362996d1e0c2cea0bee73eb18468/kube-apiserver-cert-syncer/0.log"
Oct 14 13:17:51.870675 master-1 kubenswrapper[4740]: I1014 13:17:51.870637 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:17:51.956791 master-1 kubenswrapper[4740]: I1014 13:17:51.956695 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") pod \"34b1362996d1e0c2cea0bee73eb18468\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") "
Oct 14 13:17:51.956791 master-1 kubenswrapper[4740]: I1014 13:17:51.956793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") pod \"34b1362996d1e0c2cea0bee73eb18468\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") "
Oct 14 13:17:51.957406 master-1 kubenswrapper[4740]: I1014 13:17:51.956867 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") pod \"34b1362996d1e0c2cea0bee73eb18468\" (UID: \"34b1362996d1e0c2cea0bee73eb18468\") "
Oct 14 13:17:51.957406 master-1 kubenswrapper[4740]: I1014 13:17:51.957291 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "34b1362996d1e0c2cea0bee73eb18468" (UID: "34b1362996d1e0c2cea0bee73eb18468"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:17:51.957406 master-1 kubenswrapper[4740]: I1014 13:17:51.957328 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "34b1362996d1e0c2cea0bee73eb18468" (UID: "34b1362996d1e0c2cea0bee73eb18468"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:17:51.957406 master-1 kubenswrapper[4740]: I1014 13:17:51.957347 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "34b1362996d1e0c2cea0bee73eb18468" (UID: "34b1362996d1e0c2cea0bee73eb18468"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:17:52.058224 master-1 kubenswrapper[4740]: I1014 13:17:52.058137 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:52.058224 master-1 kubenswrapper[4740]: I1014 13:17:52.058186 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-cert-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:52.058224 master-1 kubenswrapper[4740]: I1014 13:17:52.058198 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/34b1362996d1e0c2cea0bee73eb18468-resource-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:17:52.440776 master-1 kubenswrapper[4740]: I1014 13:17:52.440705 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:17:52.441014 master-1 kubenswrapper[4740]: I1014 13:17:52.440784 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:17:52.504329 master-1 kubenswrapper[4740]: I1014 13:17:52.504260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_34b1362996d1e0c2cea0bee73eb18468/kube-apiserver-cert-syncer/0.log"
Oct 14 13:17:52.505071 master-1 kubenswrapper[4740]: I1014 13:17:52.504964 4740 generic.go:334] "Generic (PLEG): container finished" podID="34b1362996d1e0c2cea0bee73eb18468" containerID="af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00" exitCode=0
Oct 14 13:17:52.505071 master-1 kubenswrapper[4740]: I1014 13:17:52.505048 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:17:52.505293 master-1 kubenswrapper[4740]: I1014 13:17:52.505107 4740 scope.go:117] "RemoveContainer" containerID="bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641"
Oct 14 13:17:52.525516 master-1 kubenswrapper[4740]: I1014 13:17:52.525462 4740 scope.go:117] "RemoveContainer" containerID="1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41"
Oct 14 13:17:52.541429 master-1 kubenswrapper[4740]: I1014 13:17:52.541394 4740 scope.go:117] "RemoveContainer" containerID="15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac"
Oct 14 13:17:52.556374 master-1 kubenswrapper[4740]: I1014 13:17:52.556285 4740 scope.go:117] "RemoveContainer" containerID="eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64"
Oct 14 13:17:52.574128 master-1 kubenswrapper[4740]: I1014 13:17:52.574063 4740 scope.go:117] "RemoveContainer" containerID="af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00"
Oct 14 13:17:52.593920 master-1 kubenswrapper[4740]: I1014 13:17:52.591536 4740 scope.go:117] "RemoveContainer" containerID="2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0"
Oct 14 13:17:52.625588 master-1 kubenswrapper[4740]: I1014 13:17:52.624567 4740 scope.go:117] "RemoveContainer" containerID="bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641"
Oct 14 13:17:52.625588 master-1 kubenswrapper[4740]: E1014 13:17:52.625188 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641\": container with ID starting with bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641 not found: ID does not exist" containerID="bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641"
Oct 14 13:17:52.625588 master-1 kubenswrapper[4740]: I1014 13:17:52.625275 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641"} err="failed to get container status \"bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641\": rpc error: code = NotFound desc = could not find container \"bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641\": container with ID starting with bac0fffdc950ba2bb8fb59674710c0725e0d3567a294bad206f0d891dfb1d641 not found: ID does not exist"
Oct 14 13:17:52.625588 master-1 kubenswrapper[4740]: I1014 13:17:52.625327 4740 scope.go:117] "RemoveContainer" containerID="1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: E1014 13:17:52.625753 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41\": container with ID starting with 1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41 not found: ID does not exist" containerID="1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: I1014 13:17:52.625798 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41"} err="failed to get container status \"1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41\": rpc error: code = NotFound desc = could not find container \"1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41\": container with ID starting with 1c6b1c78e4a7412ed9b72993bdc5b7f2ec7f6f740ac04c6bed2d01f15514af41 not found: ID does not exist"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: I1014 13:17:52.625826 4740 scope.go:117] "RemoveContainer" containerID="15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: E1014 13:17:52.626207 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac\": container with ID starting with 15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac not found: ID does not exist" containerID="15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: I1014 13:17:52.626294 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac"} err="failed to get container status \"15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac\": rpc error: code = NotFound desc = could not find container \"15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac\": container with ID starting with 15d54845b5f49b828165f9e88096b49238b04fe01341ab03c4c01c89db9465ac not found: ID does not exist"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: I1014 13:17:52.626335 4740 scope.go:117] "RemoveContainer" containerID="eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: E1014 13:17:52.626739 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64\": container with ID starting with eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64 not found: ID does not exist" containerID="eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: I1014 13:17:52.626786 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64"} err="failed to get container status \"eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64\": rpc error: code = NotFound desc = could not find container \"eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64\": container with ID starting with eebe98587083c34dc0c5267078ead8778e2a7c3db724b0310488503c3ca02f64 not found: ID does not exist"
Oct 14 13:17:52.627129 master-1 kubenswrapper[4740]: I1014 13:17:52.626828 4740 scope.go:117] "RemoveContainer" containerID="af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00"
Oct 14 13:17:52.628363 master-1 kubenswrapper[4740]: E1014 13:17:52.627314 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00\": container with ID starting with af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00 not found: ID does not exist" containerID="af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00"
Oct 14 13:17:52.628363 master-1 kubenswrapper[4740]: I1014 13:17:52.627363 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00"} err="failed to get container status \"af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00\": rpc error: code = NotFound desc = could not find container \"af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00\": container with ID starting with af53c2758fa001372d14c1bfaa98a2607a88214e4029af3f7f5bdacf3cb11c00 not found: ID does not exist"
Oct 14 13:17:52.628363 master-1 kubenswrapper[4740]: I1014 13:17:52.627396 4740 scope.go:117] "RemoveContainer" containerID="2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0"
Oct 14 13:17:52.628363 master-1 kubenswrapper[4740]: E1014 13:17:52.628138 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0\": container with ID starting with 2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0 not found: ID does not exist" containerID="2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0"
Oct 14 13:17:52.628363 master-1 kubenswrapper[4740]: I1014 13:17:52.628183 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0"} err="failed to get container status \"2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0\": rpc error: code = NotFound desc = could not find container \"2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0\": container with ID starting with 2c3015742548bc07475cdf435d08cf33207523b4030911cb323aa71e19ff2fe0 not found: ID does not exist"
Oct 14 13:17:52.952741 master-1 kubenswrapper[4740]: I1014 13:17:52.952676 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b1362996d1e0c2cea0bee73eb18468" path="/var/lib/kubelet/pods/34b1362996d1e0c2cea0bee73eb18468/volumes"
Oct 14 13:17:53.956917 master-1 kubenswrapper[4740]: I1014 13:17:53.956827 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 14 13:17:53.956917 master-1 kubenswrapper[4740]: I1014 13:17:53.956916 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 14 13:17:54.943666 master-1 kubenswrapper[4740]: I1014 13:17:54.943536 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:17:54.961719 master-1 kubenswrapper[4740]: I1014 13:17:54.961655 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:17:54.961719 master-1 kubenswrapper[4740]: I1014 13:17:54.961704 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:17:55.060044 master-1 kubenswrapper[4740]: E1014 13:17:55.059891 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:17:56.706882 master-1 kubenswrapper[4740]: I1014 13:17:56.706790 4740 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body=
Oct 14 13:17:56.707893 master-1 kubenswrapper[4740]: I1014 13:17:56.706924 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="87a988d8-ed78-4396-a4fa-d856ff93860f" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused"
Oct 14 13:17:57.441632 master-1 kubenswrapper[4740]: I1014 13:17:57.441589 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:17:57.441926 master-1 kubenswrapper[4740]: I1014 13:17:57.441900 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:17:57.442064 master-1 kubenswrapper[4740]: I1014 13:17:57.442050 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"
Oct 14 13:17:57.443076 master-1 kubenswrapper[4740]: I1014 13:17:57.442988 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:17:57.443207 master-1 kubenswrapper[4740]: I1014 13:17:57.443146 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:17:58.956964 master-1 kubenswrapper[4740]: I1014 13:17:58.956882 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 14 13:17:58.956964 master-1 kubenswrapper[4740]: I1014 13:17:58.956952 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 14 13:17:59.942844 master-1 kubenswrapper[4740]: I1014 13:17:59.942796 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:17:59.962197 master-1 kubenswrapper[4740]: I1014 13:17:59.962162 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:17:59.962766 master-1 kubenswrapper[4740]: I1014 13:17:59.962746 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:01.707033 master-1 kubenswrapper[4740]: I1014 13:18:01.706927 4740 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 14 13:18:01.707033 master-1 kubenswrapper[4740]: I1014 13:18:01.707011 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="87a988d8-ed78-4396-a4fa-d856ff93860f" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 14 13:18:01.708125 master-1 kubenswrapper[4740]: I1014 13:18:01.707112 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 14 13:18:01.708125 master-1 kubenswrapper[4740]: I1014 13:18:01.707926 4740 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 14 13:18:01.708125 master-1 
kubenswrapper[4740]: I1014 13:18:01.708050 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="87a988d8-ed78-4396-a4fa-d856ff93860f" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 14 13:18:02.439935 master-1 kubenswrapper[4740]: I1014 13:18:02.439872 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:18:02.440304 master-1 kubenswrapper[4740]: I1014 13:18:02.439953 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:18:03.956673 master-1 kubenswrapper[4740]: I1014 13:18:03.956597 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:18:03.957394 master-1 kubenswrapper[4740]: I1014 13:18:03.956674 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:18:05.060340 master-1 kubenswrapper[4740]: E1014 13:18:05.060223 4740 controller.go:195] 
"Failed to update lease" err="Put \"https://api-int.ocp.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-1?timeout=10s\": context deadline exceeded" Oct 14 13:18:06.707101 master-1 kubenswrapper[4740]: I1014 13:18:06.707005 4740 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 14 13:18:06.707101 master-1 kubenswrapper[4740]: I1014 13:18:06.707095 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="87a988d8-ed78-4396-a4fa-d856ff93860f" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 14 13:18:07.440598 master-1 kubenswrapper[4740]: I1014 13:18:07.440511 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:18:07.440598 master-1 kubenswrapper[4740]: I1014 13:18:07.440590 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:18:08.944745 master-1 kubenswrapper[4740]: I1014 13:18:08.944672 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:18:08.945743 master-1 kubenswrapper[4740]: I1014 13:18:08.944755 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:18:08.956485 master-1 kubenswrapper[4740]: I1014 13:18:08.956384 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:18:08.956485 master-1 kubenswrapper[4740]: I1014 13:18:08.956466 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:18:09.998330 master-1 kubenswrapper[4740]: I1014 13:18:09.998267 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:10.018790 master-1 kubenswrapper[4740]: I1014 13:18:10.018726 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:10.059789 master-1 kubenswrapper[4740]: W1014 13:18:10.059708 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-fb8f2f44bcae1a186a655c93364d11e095a33cefe3d0fb53df6f97c9d907d695 WatchSource:0}: Error finding container fb8f2f44bcae1a186a655c93364d11e095a33cefe3d0fb53df6f97c9d907d695: Status 404 returned error can't find the container with id fb8f2f44bcae1a186a655c93364d11e095a33cefe3d0fb53df6f97c9d907d695 Oct 14 13:18:10.637505 master-1 kubenswrapper[4740]: I1014 13:18:10.637422 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"1050094e1399d2efd697dc283130c5f7","Type":"ContainerStarted","Data":"54f46dc9ca357d24aa0d18e8d5db0aee69d6d73cc41e66f9af2ffdab2e4b7cc3"} Oct 14 13:18:10.637505 master-1 kubenswrapper[4740]: I1014 13:18:10.637496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"1050094e1399d2efd697dc283130c5f7","Type":"ContainerStarted","Data":"516862ae041aab7390f584c0cbf3cdf2154c45cbdb2591237446bb7d27696ed4"} Oct 14 13:18:10.637505 master-1 kubenswrapper[4740]: I1014 13:18:10.637511 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"1050094e1399d2efd697dc283130c5f7","Type":"ContainerStarted","Data":"fb8f2f44bcae1a186a655c93364d11e095a33cefe3d0fb53df6f97c9d907d695"} Oct 14 13:18:11.653760 master-1 kubenswrapper[4740]: I1014 13:18:11.653692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" 
event={"ID":"1050094e1399d2efd697dc283130c5f7","Type":"ContainerStarted","Data":"84816b63a679d0da082379c16b62aec3006ff768247ca2c54217f373f103c8e1"} Oct 14 13:18:11.653760 master-1 kubenswrapper[4740]: I1014 13:18:11.653758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"1050094e1399d2efd697dc283130c5f7","Type":"ContainerStarted","Data":"410d42ad1c03831b0b0e58b34e9c7c20fbce91f19d06aca1df997680840d4c82"} Oct 14 13:18:11.654554 master-1 kubenswrapper[4740]: I1014 13:18:11.654261 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:11.654554 master-1 kubenswrapper[4740]: I1014 13:18:11.654312 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:11.714579 master-1 kubenswrapper[4740]: I1014 13:18:11.714514 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" Oct 14 13:18:12.440963 master-1 kubenswrapper[4740]: I1014 13:18:12.440865 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:18:12.440963 master-1 kubenswrapper[4740]: I1014 13:18:12.440933 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 
13:18:12.664805 master-1 kubenswrapper[4740]: I1014 13:18:12.664700 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler-cert-syncer/0.log" Oct 14 13:18:12.665844 master-1 kubenswrapper[4740]: I1014 13:18:12.665782 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler/0.log" Oct 14 13:18:12.666597 master-1 kubenswrapper[4740]: I1014 13:18:12.666530 4740 generic.go:334] "Generic (PLEG): container finished" podID="a61df698d34d049669621b2249bfe758" containerID="7ed5379248b9c8e16850c8587a413da8fce2a5280c56803e5377b6801674d1a9" exitCode=137 Oct 14 13:18:12.667117 master-1 kubenswrapper[4740]: I1014 13:18:12.667069 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:12.667117 master-1 kubenswrapper[4740]: I1014 13:18:12.667107 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:13.121029 master-1 kubenswrapper[4740]: I1014 13:18:13.120956 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler-cert-syncer/0.log" Oct 14 13:18:13.122073 master-1 kubenswrapper[4740]: I1014 13:18:13.122013 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler/0.log" Oct 14 13:18:13.123026 master-1 kubenswrapper[4740]: I1014 13:18:13.122953 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:18:13.169973 master-1 kubenswrapper[4740]: I1014 13:18:13.169870 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") pod \"a61df698d34d049669621b2249bfe758\" (UID: \"a61df698d34d049669621b2249bfe758\") " Oct 14 13:18:13.169973 master-1 kubenswrapper[4740]: I1014 13:18:13.169983 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") pod \"a61df698d34d049669621b2249bfe758\" (UID: \"a61df698d34d049669621b2249bfe758\") " Oct 14 13:18:13.170414 master-1 kubenswrapper[4740]: I1014 13:18:13.170027 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a61df698d34d049669621b2249bfe758" (UID: "a61df698d34d049669621b2249bfe758"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:18:13.170414 master-1 kubenswrapper[4740]: I1014 13:18:13.170176 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "a61df698d34d049669621b2249bfe758" (UID: "a61df698d34d049669621b2249bfe758"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:18:13.170650 master-1 kubenswrapper[4740]: I1014 13:18:13.170437 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:18:13.170650 master-1 kubenswrapper[4740]: I1014 13:18:13.170460 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a61df698d34d049669621b2249bfe758-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:18:13.676963 master-1 kubenswrapper[4740]: I1014 13:18:13.676865 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler-cert-syncer/0.log" Oct 14 13:18:13.678142 master-1 kubenswrapper[4740]: I1014 13:18:13.678086 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-1_a61df698d34d049669621b2249bfe758/kube-scheduler/0.log" Oct 14 13:18:13.678935 master-1 kubenswrapper[4740]: I1014 13:18:13.678877 4740 scope.go:117] "RemoveContainer" containerID="c237848c47768b8806a19f783f2d47f481ae5a551fb55ae77977077026c61294" Oct 14 13:18:13.679049 master-1 kubenswrapper[4740]: I1014 13:18:13.678982 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:18:13.702825 master-1 kubenswrapper[4740]: I1014 13:18:13.702773 4740 scope.go:117] "RemoveContainer" containerID="6fc564eebe0d572c7e176e3aca3156a0fc412212ac1fc3f10e1293f2dcc05d04" Oct 14 13:18:13.725661 master-1 kubenswrapper[4740]: I1014 13:18:13.725602 4740 scope.go:117] "RemoveContainer" containerID="7ed5379248b9c8e16850c8587a413da8fce2a5280c56803e5377b6801674d1a9" Oct 14 13:18:13.747398 master-1 kubenswrapper[4740]: I1014 13:18:13.747349 4740 scope.go:117] "RemoveContainer" containerID="8cf8d336358e5e89ddb3d21d4fac5892909c3f2b88f04a63d122268437bd6a7a" Oct 14 13:18:13.957174 master-1 kubenswrapper[4740]: I1014 13:18:13.956946 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:18:13.957174 master-1 kubenswrapper[4740]: I1014 13:18:13.957038 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:18:14.953959 master-1 kubenswrapper[4740]: I1014 13:18:14.953838 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61df698d34d049669621b2249bfe758" path="/var/lib/kubelet/pods/a61df698d34d049669621b2249bfe758/volumes" Oct 14 13:18:15.091140 master-1 kubenswrapper[4740]: I1014 13:18:15.091017 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-mzrkb_ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67/assisted-installer-controller/0.log" Oct 14 13:18:17.266459 master-1 kubenswrapper[4740]: I1014 
13:18:17.266370 4740 scope.go:117] "RemoveContainer" containerID="e25a090fbeaf10ae15d12c1a5a4fc4c7f9e4949adb35ef26373fca7108a10da2" Oct 14 13:18:17.290555 master-1 kubenswrapper[4740]: I1014 13:18:17.290493 4740 scope.go:117] "RemoveContainer" containerID="1454c7db3bd11bf75bea8fa684ae07789621749144f0ddb7b02fe3b66731d7cd" Oct 14 13:18:17.308734 master-1 kubenswrapper[4740]: I1014 13:18:17.308673 4740 scope.go:117] "RemoveContainer" containerID="8f7f6048dbdc1a310a3e5e5e10294d23b83452d6cb4d457ef27b2ca284c65673" Oct 14 13:18:17.440399 master-1 kubenswrapper[4740]: I1014 13:18:17.440217 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:18:17.440834 master-1 kubenswrapper[4740]: I1014 13:18:17.440426 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" Oct 14 13:18:18.957442 master-1 kubenswrapper[4740]: I1014 13:18:18.956671 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:18:18.957442 master-1 kubenswrapper[4740]: I1014 13:18:18.956798 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 
192.168.34.11:6443: connect: connection refused" Oct 14 13:18:20.020463 master-1 kubenswrapper[4740]: I1014 13:18:20.019434 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.020463 master-1 kubenswrapper[4740]: I1014 13:18:20.019516 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.020463 master-1 kubenswrapper[4740]: I1014 13:18:20.019537 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.020463 master-1 kubenswrapper[4740]: I1014 13:18:20.020019 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:20.020463 master-1 kubenswrapper[4740]: I1014 13:18:20.020046 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:20.020463 master-1 kubenswrapper[4740]: I1014 13:18:20.020288 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.027806 master-1 kubenswrapper[4740]: I1014 13:18:20.027738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.028498 master-1 kubenswrapper[4740]: I1014 13:18:20.028440 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.746257 master-1 kubenswrapper[4740]: I1014 13:18:20.746178 4740 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:20.746257 master-1 kubenswrapper[4740]: I1014 13:18:20.746260 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:20.755076 master-1 kubenswrapper[4740]: I1014 13:18:20.755028 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.755438 master-1 kubenswrapper[4740]: I1014 13:18:20.755402 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:18:20.943143 master-1 kubenswrapper[4740]: I1014 13:18:20.943065 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:18:20.967653 master-1 kubenswrapper[4740]: I1014 13:18:20.967597 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a" Oct 14 13:18:20.967653 master-1 kubenswrapper[4740]: I1014 13:18:20.967646 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a" Oct 14 13:18:20.989192 master-1 kubenswrapper[4740]: I1014 13:18:20.989109 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:18:21.008409 master-1 kubenswrapper[4740]: I1014 13:18:21.008286 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" Oct 14 13:18:21.030048 master-1 kubenswrapper[4740]: W1014 13:18:21.029979 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ffd3b5548bcf48fce7bfb9a8c802165.slice/crio-efb7f6ad2cf2eb22f37f1e5404d1e40726129144da8faae85480150c5fa4ff18 WatchSource:0}: Error finding container efb7f6ad2cf2eb22f37f1e5404d1e40726129144da8faae85480150c5fa4ff18: Status 404 returned error can't find the container with id efb7f6ad2cf2eb22f37f1e5404d1e40726129144da8faae85480150c5fa4ff18 Oct 14 13:18:21.445973 master-1 kubenswrapper[4740]: E1014 13:18:21.445822 4740 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Oct 14 13:18:21.445973 master-1 kubenswrapper[4740]: &Event{ObjectMeta:{openshift-kube-scheduler-guard-master-1.186e5d974a982653 openshift-kube-scheduler 11823 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-guard-master-1,UID:4d6c6f97-2228-4b4b-abd6-a4a6d00db759,APIVersion:v1,ResourceVersion:10732,FieldPath:spec.containers{guard},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.34.11:10259/healthz": dial tcp 192.168.34.11:10259: connect: connection refused Oct 14 13:18:21.445973 master-1 kubenswrapper[4740]: body: Oct 14 13:18:21.445973 master-1 kubenswrapper[4740]: ,Source:EventSource{Component:kubelet,Host:master-1,},FirstTimestamp:2025-10-14 13:10:08 +0000 UTC,LastTimestamp:2025-10-14 13:17:47.441444378 +0000 UTC m=+693.251733707,Count:18,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-1,} Oct 14 13:18:21.445973 master-1 kubenswrapper[4740]: > Oct 14 13:18:21.755520 master-1 
kubenswrapper[4740]: I1014 13:18:21.755404 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"54c537ae4cbb495a96858059964f4d98af5ae1f2225e7f14871135c8b3c7e8f5"} Oct 14 13:18:21.755520 master-1 kubenswrapper[4740]: I1014 13:18:21.755482 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"efb7f6ad2cf2eb22f37f1e5404d1e40726129144da8faae85480150c5fa4ff18"} Oct 14 13:18:21.755979 master-1 kubenswrapper[4740]: I1014 13:18:21.755889 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:21.755979 master-1 kubenswrapper[4740]: I1014 13:18:21.755927 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7" Oct 14 13:18:21.756478 master-1 kubenswrapper[4740]: I1014 13:18:21.756397 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a" Oct 14 13:18:21.756566 master-1 kubenswrapper[4740]: I1014 13:18:21.756474 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a" Oct 14 13:18:22.440446 master-1 kubenswrapper[4740]: I1014 13:18:22.440174 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body= Oct 14 13:18:22.441511 
master-1 kubenswrapper[4740]: I1014 13:18:22.441419 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:18:22.762716 master-1 kubenswrapper[4740]: I1014 13:18:22.762635 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7"
Oct 14 13:18:22.762716 master-1 kubenswrapper[4740]: I1014 13:18:22.762690 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7"
Oct 14 13:18:23.956732 master-1 kubenswrapper[4740]: I1014 13:18:23.956659 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 14 13:18:23.957330 master-1 kubenswrapper[4740]: I1014 13:18:23.956758 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 14 13:18:27.440466 master-1 kubenswrapper[4740]: I1014 13:18:27.440361 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused" start-of-body=
Oct 14 13:18:27.440466 master-1 kubenswrapper[4740]: I1014 13:18:27.440455 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": dial tcp 192.168.34.11:10259: connect: connection refused"
Oct 14 13:18:28.814888 master-1 kubenswrapper[4740]: I1014 13:18:28.814716 4740 generic.go:334] "Generic (PLEG): container finished" podID="1ffd3b5548bcf48fce7bfb9a8c802165" containerID="54c537ae4cbb495a96858059964f4d98af5ae1f2225e7f14871135c8b3c7e8f5" exitCode=0
Oct 14 13:18:28.814888 master-1 kubenswrapper[4740]: I1014 13:18:28.814783 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerDied","Data":"54c537ae4cbb495a96858059964f4d98af5ae1f2225e7f14871135c8b3c7e8f5"}
Oct 14 13:18:28.815916 master-1 kubenswrapper[4740]: I1014 13:18:28.815330 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:28.815916 master-1 kubenswrapper[4740]: I1014 13:18:28.815361 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:28.957351 master-1 kubenswrapper[4740]: I1014 13:18:28.957277 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 14 13:18:28.957713 master-1 kubenswrapper[4740]: I1014 13:18:28.957356 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 14 13:18:28.965970 master-1 kubenswrapper[4740]: E1014 13:18:28.965892 4740 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:28.966511 master-1 kubenswrapper[4740]: I1014 13:18:28.966464 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:29.822948 master-1 kubenswrapper[4740]: I1014 13:18:29.822881 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"1a34a934f0c4bf2639fcb8208c08b6e076003b3a0922ff8141d35d442b8a26ef"}
Oct 14 13:18:29.822948 master-1 kubenswrapper[4740]: I1014 13:18:29.822938 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"327175140cf59ab7ac88c5ea2e559cf227d62b5f1a4d3c63ba276efc173b7b62"}
Oct 14 13:18:29.822948 master-1 kubenswrapper[4740]: I1014 13:18:29.822951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" event={"ID":"1ffd3b5548bcf48fce7bfb9a8c802165","Type":"ContainerStarted","Data":"cfa81d327f53a232dace2aee9fb219edaaca5b0cb1ff839a64917fea458181e6"}
Oct 14 13:18:29.823608 master-1 kubenswrapper[4740]: I1014 13:18:29.823206 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:29.823608 master-1 kubenswrapper[4740]: I1014 13:18:29.823246 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:29.823608 master-1 kubenswrapper[4740]: I1014 13:18:29.823221 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:18:29.824958 master-1 kubenswrapper[4740]: I1014 13:18:29.824897 4740 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="0b4d74993a1401e4b6e850b179ab51065f53ea80ad8756c8a740b78b0804b4e2" exitCode=0
Oct 14 13:18:29.825025 master-1 kubenswrapper[4740]: I1014 13:18:29.824985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerDied","Data":"0b4d74993a1401e4b6e850b179ab51065f53ea80ad8756c8a740b78b0804b4e2"}
Oct 14 13:18:29.825062 master-1 kubenswrapper[4740]: I1014 13:18:29.825038 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"bb1d7162e91ff31de74098c4631db8273125ed51c5833102d116437cfbf56eb9"}
Oct 14 13:18:29.825531 master-1 kubenswrapper[4740]: I1014 13:18:29.825494 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:29.825531 master-1 kubenswrapper[4740]: I1014 13:18:29.825523 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:29.846876 master-1 kubenswrapper[4740]: I1014 13:18:29.846802 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:30.834787 master-1 kubenswrapper[4740]: I1014 13:18:30.834737 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:30.834787 master-1 kubenswrapper[4740]: I1014 13:18:30.834771 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:30.835310 master-1 kubenswrapper[4740]: I1014 13:18:30.834923 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"d5925b84c60ef6f3443add991b592427b7d32ac6283cdca5542873b4676b09d9"}
Oct 14 13:18:30.835310 master-1 kubenswrapper[4740]: I1014 13:18:30.834946 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"3f98f494037c823a91c8e5e8cb3c5e66596570a1d3b528a3c2d4edd5aa660c69"}
Oct 14 13:18:30.835310 master-1 kubenswrapper[4740]: I1014 13:18:30.834957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"4ce9abd39c3aeaad89568cd60fb0e427f27d0f38adcdff7f77bef90692c33338"}
Oct 14 13:18:31.851500 master-1 kubenswrapper[4740]: I1014 13:18:31.851423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"e45346c1521e16aa358a9e0243b29f57c340b98cd05f02aa4089f7ed3a6ef8d0"}
Oct 14 13:18:31.851500 master-1 kubenswrapper[4740]: I1014 13:18:31.851469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"e39186c2ebd02622803bdbec6984de2a","Type":"ContainerStarted","Data":"f8f2db597279287746568152e8aa7a3e94b07b8fc1075f744d7794b4d682afbc"}
Oct 14 13:18:31.852517 master-1 kubenswrapper[4740]: I1014 13:18:31.851612 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:31.852517 master-1 kubenswrapper[4740]: I1014 13:18:31.851734 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:31.852517 master-1 kubenswrapper[4740]: I1014 13:18:31.851761 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:32.858987 master-1 kubenswrapper[4740]: I1014 13:18:32.858903 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:32.858987 master-1 kubenswrapper[4740]: I1014 13:18:32.858944 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:33.964293 master-1 kubenswrapper[4740]: I1014 13:18:33.964174 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1"
Oct 14 13:18:33.967015 master-1 kubenswrapper[4740]: I1014 13:18:33.966970 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:33.967128 master-1 kubenswrapper[4740]: I1014 13:18:33.967030 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:33.967575 master-1 kubenswrapper[4740]: I1014 13:18:33.967536 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:33.967575 master-1 kubenswrapper[4740]: I1014 13:18:33.967568 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:33.975050 master-1 kubenswrapper[4740]: I1014 13:18:33.974987 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:34.881276 master-1 kubenswrapper[4740]: I1014 13:18:34.881179 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:34.881276 master-1 kubenswrapper[4740]: I1014 13:18:34.881222 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:34.888937 master-1 kubenswrapper[4740]: I1014 13:18:34.888871 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:35.887381 master-1 kubenswrapper[4740]: I1014 13:18:35.887278 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:35.887381 master-1 kubenswrapper[4740]: I1014 13:18:35.887319 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:37.440446 master-1 kubenswrapper[4740]: I1014 13:18:37.440332 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 13:18:37.440446 master-1 kubenswrapper[4740]: I1014 13:18:37.440408 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:18:38.844707 master-1 kubenswrapper[4740]: I1014 13:18:38.844527 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"]
Oct 14 13:18:38.845760 master-1 kubenswrapper[4740]: I1014 13:18:38.845482 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7"
Oct 14 13:18:38.845760 master-1 kubenswrapper[4740]: I1014 13:18:38.845520 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7"
Oct 14 13:18:38.848087 master-1 kubenswrapper[4740]: I1014 13:18:38.848044 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"]
Oct 14 13:18:38.854037 master-1 kubenswrapper[4740]: I1014 13:18:38.853975 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"]
Oct 14 13:18:38.859668 master-1 kubenswrapper[4740]: I1014 13:18:38.859611 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"]
Oct 14 13:18:38.860392 master-1 kubenswrapper[4740]: I1014 13:18:38.860343 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:38.860392 master-1 kubenswrapper[4740]: I1014 13:18:38.860384 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:38.866859 master-1 kubenswrapper[4740]: I1014 13:18:38.866794 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"]
Oct 14 13:18:38.872635 master-1 kubenswrapper[4740]: I1014 13:18:38.872577 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-1"]
Oct 14 13:18:38.878395 master-1 kubenswrapper[4740]: I1014 13:18:38.878338 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 14 13:18:38.878985 master-1 kubenswrapper[4740]: I1014 13:18:38.878946 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:38.878985 master-1 kubenswrapper[4740]: I1014 13:18:38.878970 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:38.883799 master-1 kubenswrapper[4740]: I1014 13:18:38.883718 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 14 13:18:38.886570 master-1 kubenswrapper[4740]: I1014 13:18:38.886530 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"]
Oct 14 13:18:38.908801 master-1 kubenswrapper[4740]: I1014 13:18:38.908661 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:38.908801 master-1 kubenswrapper[4740]: I1014 13:18:38.908728 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podUID="b47d4525-6546-45c3-96f8-dd43be5a9a1a"
Oct 14 13:18:38.909863 master-1 kubenswrapper[4740]: I1014 13:18:38.909817 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7"
Oct 14 13:18:38.909863 master-1 kubenswrapper[4740]: I1014 13:18:38.909845 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="5ccf5b88-f886-4202-be03-ec07969a54e7"
Oct 14 13:18:38.910181 master-1 kubenswrapper[4740]: I1014 13:18:38.910119 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:38.910181 master-1 kubenswrapper[4740]: I1014 13:18:38.910172 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="5c674716-0f58-4f57-8fc0-c6ffed7e53eb"
Oct 14 13:18:42.440870 master-1 kubenswrapper[4740]: I1014 13:18:42.440751 4740 patch_prober.go:28] interesting pod/openshift-kube-scheduler-guard-master-1 container/guard namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.34.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 13:18:42.440870 master-1 kubenswrapper[4740]: I1014 13:18:42.440853 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1" podUID="4d6c6f97-2228-4b4b-abd6-a4a6d00db759" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:18:43.812753 master-1 kubenswrapper[4740]: I1014 13:18:43.812676 4740 patch_prober.go:28] interesting pod/marketplace-operator-c4f798dd4-djh96 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body=
Oct 14 13:18:43.813660 master-1 kubenswrapper[4740]: I1014 13:18:43.812748 4740 patch_prober.go:28] interesting pod/marketplace-operator-c4f798dd4-djh96 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused" start-of-body=
Oct 14 13:18:43.813660 master-1 kubenswrapper[4740]: I1014 13:18:43.812769 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" podUID="2a106ff8-388a-4d30-8370-aad661eb4365" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused"
Oct 14 13:18:43.813660 master-1 kubenswrapper[4740]: I1014 13:18:43.812856 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" podUID="2a106ff8-388a-4d30-8370-aad661eb4365" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.25:8080/healthz\": dial tcp 10.128.0.25:8080: connect: connection refused"
Oct 14 13:18:43.956590 master-1 kubenswrapper[4740]: I1014 13:18:43.956490 4740 generic.go:334] "Generic (PLEG): container finished" podID="2a106ff8-388a-4d30-8370-aad661eb4365" containerID="103a0a432a550549596fe64f0652cd85127a6a4c94458fd9714e55d1dbc13041" exitCode=0
Oct 14 13:18:43.956590 master-1 kubenswrapper[4740]: I1014 13:18:43.956553 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" event={"ID":"2a106ff8-388a-4d30-8370-aad661eb4365","Type":"ContainerDied","Data":"103a0a432a550549596fe64f0652cd85127a6a4c94458fd9714e55d1dbc13041"}
Oct 14 13:18:43.957373 master-1 kubenswrapper[4740]: I1014 13:18:43.957333 4740 scope.go:117] "RemoveContainer" containerID="103a0a432a550549596fe64f0652cd85127a6a4c94458fd9714e55d1dbc13041"
Oct 14 13:18:44.773616 master-1 kubenswrapper[4740]: I1014 13:18:44.773572 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-guard-master-1"
Oct 14 13:18:44.966083 master-1 kubenswrapper[4740]: I1014 13:18:44.965953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96" event={"ID":"2a106ff8-388a-4d30-8370-aad661eb4365","Type":"ContainerStarted","Data":"08331b7c1f1029a70978cc19bd7e5408fcc4c113071b95dfc8173719e4022889"}
Oct 14 13:18:44.967114 master-1 kubenswrapper[4740]: I1014 13:18:44.967029 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:18:44.970189 master-1 kubenswrapper[4740]: I1014 13:18:44.970129 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-c4f798dd4-djh96"
Oct 14 13:18:48.973204 master-1 kubenswrapper[4740]: I1014 13:18:48.973005 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:18:49.523803 master-1 kubenswrapper[4740]: I1014 13:18:49.523713 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:18:49.524197 master-1 kubenswrapper[4740]: E1014 13:18:49.523915 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:20:51.523880825 +0000 UTC m=+877.334170184 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:18:49.624783 master-1 kubenswrapper[4740]: I1014 13:18:49.624692 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:18:49.625114 master-1 kubenswrapper[4740]: E1014 13:18:49.624901 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:20:51.624869887 +0000 UTC m=+877.435159246 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory
Oct 14 13:18:49.767686 master-1 kubenswrapper[4740]: I1014 13:18:49.767611 4740 status_manager.go:851] "Failed to get status for pod" podUID="28eeaa8e-ec52-426b-a893-ccce40030c9b" pod="openshift-kube-scheduler/installer-6-master-1" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)"
Oct 14 13:18:49.811690 master-1 kubenswrapper[4740]: I1014 13:18:49.811459 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" podStartSLOduration=20.811431326 podStartE2EDuration="20.811431326s" podCreationTimestamp="2025-10-14 13:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:18:49.81087823 +0000 UTC m=+755.621167599" watchObservedRunningTime="2025-10-14 13:18:49.811431326 +0000 UTC m=+755.621720685"
Oct 14 13:18:49.953439 master-1 kubenswrapper[4740]: I1014 13:18:49.953305 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podStartSLOduration=39.953280593 podStartE2EDuration="39.953280593s" podCreationTimestamp="2025-10-14 13:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:18:49.952724377 +0000 UTC m=+755.763013746" watchObservedRunningTime="2025-10-14 13:18:49.953280593 +0000 UTC m=+755.763569962"
Oct 14 13:18:49.979553 master-1 kubenswrapper[4740]: I1014 13:18:49.979454 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1" podStartSLOduration=28.979434062 podStartE2EDuration="28.979434062s" podCreationTimestamp="2025-10-14 13:18:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:18:49.978805395 +0000 UTC m=+755.789094764" watchObservedRunningTime="2025-10-14 13:18:49.979434062 +0000 UTC m=+755.789723431"
Oct 14 13:18:49.997402 master-1 kubenswrapper[4740]: E1014 13:18:49.997329 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" podUID="cc579fa5-c1e0-40ed-b1f3-e953a42e74d6"
Oct 14 13:18:49.997402 master-1 kubenswrapper[4740]: E1014 13:18:49.997346 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" podUID="180ced15-1fb1-464d-85f2-0bcc0d836dab"
Oct 14 13:18:51.006443 master-1 kubenswrapper[4740]: I1014 13:18:51.006361 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:18:51.006443 master-1 kubenswrapper[4740]: I1014 13:18:51.006428 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:19:07.129032 master-1 kubenswrapper[4740]: I1014 13:19:07.128946 4740 generic.go:334] "Generic (PLEG): container finished" podID="c4ca808a-394d-4a17-ac12-1df264c7ed92" containerID="b61df5cfa8541e3132f5a70893b90c6aeb0cc1ace2485b37f230173855705d39" exitCode=0
Oct 14 13:19:07.129032 master-1 kubenswrapper[4740]: I1014 13:19:07.129020 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" event={"ID":"c4ca808a-394d-4a17-ac12-1df264c7ed92","Type":"ContainerDied","Data":"b61df5cfa8541e3132f5a70893b90c6aeb0cc1ace2485b37f230173855705d39"}
Oct 14 13:19:07.129908 master-1 kubenswrapper[4740]: I1014 13:19:07.129746 4740 scope.go:117] "RemoveContainer" containerID="b61df5cfa8541e3132f5a70893b90c6aeb0cc1ace2485b37f230173855705d39"
Oct 14 13:19:08.138283 master-1 kubenswrapper[4740]: I1014 13:19:08.138191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc" event={"ID":"c4ca808a-394d-4a17-ac12-1df264c7ed92","Type":"ContainerStarted","Data":"6e6ffb4a4fe439d7a63719179e7fca557382bc2e031db1ff20eded5bf7b98ecd"}
Oct 14 13:19:09.148832 master-1 kubenswrapper[4740]: I1014 13:19:09.148727 4740 generic.go:334] "Generic (PLEG): container finished" podID="eae22243-e292-4623-90b4-dae431cf47dc" containerID="f5655dabf1018f785c93b92fbbbc4713ff153e0d4dbb155184adb636f3b0c938" exitCode=0
Oct 14 13:19:09.148832 master-1 kubenswrapper[4740]: I1014 13:19:09.148785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" event={"ID":"eae22243-e292-4623-90b4-dae431cf47dc","Type":"ContainerDied","Data":"f5655dabf1018f785c93b92fbbbc4713ff153e0d4dbb155184adb636f3b0c938"}
Oct 14 13:19:09.148832 master-1 kubenswrapper[4740]: I1014 13:19:09.148830 4740 scope.go:117] "RemoveContainer" containerID="fe0263de8180e4d07e93f75cd5e428f39e11c32e6586b3b42beb63acb6a0eea2"
Oct 14 13:19:09.150078 master-1 kubenswrapper[4740]: I1014 13:19:09.149374 4740 scope.go:117] "RemoveContainer" containerID="f5655dabf1018f785c93b92fbbbc4713ff153e0d4dbb155184adb636f3b0c938"
Oct 14 13:19:09.150078 master-1 kubenswrapper[4740]: E1014 13:19:09.149641 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator pod=network-operator-854f54f8c9-t6kgz_openshift-network-operator(eae22243-e292-4623-90b4-dae431cf47dc)\"" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" podUID="eae22243-e292-4623-90b4-dae431cf47dc"
Oct 14 13:19:21.016647 master-1 kubenswrapper[4740]: I1014 13:19:21.016484 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-1"
Oct 14 13:19:21.943798 master-1 kubenswrapper[4740]: I1014 13:19:21.943715 4740 scope.go:117] "RemoveContainer" containerID="f5655dabf1018f785c93b92fbbbc4713ff153e0d4dbb155184adb636f3b0c938"
Oct 14 13:19:23.243661 master-1 kubenswrapper[4740]: I1014 13:19:23.243564 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-854f54f8c9-t6kgz" event={"ID":"eae22243-e292-4623-90b4-dae431cf47dc","Type":"ContainerStarted","Data":"f9cce6412b4069bec58570945080b5fe8069bb9dc970b23cbbdbf0968e160c63"}
Oct 14 13:19:24.253515 master-1 kubenswrapper[4740]: I1014 13:19:24.253343 4740 generic.go:334] "Generic (PLEG): container finished" podID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerID="ffd4998245ebc17a6f03025aacb5ec867c7637eefba8864af77e8d4e546113b1" exitCode=0
Oct 14 13:19:24.253515 master-1 kubenswrapper[4740]: I1014 13:19:24.253464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerDied","Data":"ffd4998245ebc17a6f03025aacb5ec867c7637eefba8864af77e8d4e546113b1"}
Oct 14 13:19:24.254529 master-1 kubenswrapper[4740]: I1014 13:19:24.254287 4740 scope.go:117] "RemoveContainer" containerID="ffd4998245ebc17a6f03025aacb5ec867c7637eefba8864af77e8d4e546113b1"
Oct 14 13:19:24.859754 master-1 kubenswrapper[4740]: I1014 13:19:24.859677 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:19:24.859754 master-1 kubenswrapper[4740]: I1014 13:19:24.859745 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:19:25.265220 master-1 kubenswrapper[4740]: I1014 13:19:25.265136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerStarted","Data":"61e2daca2897fcccbe37061c0f5b0d2fe210930fbd45f1ce31fa38a3f52c60ff"}
Oct 14 13:19:25.266020 master-1 kubenswrapper[4740]: I1014 13:19:25.265713 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:19:25.273697 master-1 kubenswrapper[4740]: I1014 13:19:25.273635 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:19:27.838331 master-1 kubenswrapper[4740]: E1014 13:19:27.838282 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:19:27.839454 master-1 kubenswrapper[4740]: E1014 13:19:27.839424 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:21:29.839395459 +0000 UTC m=+915.649684818 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:19:32.506761 master-1 kubenswrapper[4740]: I1014 13:19:32.506690 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"]
Oct 14 13:19:32.507303 master-1 kubenswrapper[4740]: E1014 13:19:32.507008 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530f21ca-695c-4cd9-a086-08aff304d820" containerName="installer"
Oct 14 13:19:32.507303 master-1 kubenswrapper[4740]: I1014 13:19:32.507029 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="530f21ca-695c-4cd9-a086-08aff304d820" containerName="installer"
Oct 14 13:19:32.507303 master-1 kubenswrapper[4740]: I1014 13:19:32.507161 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="530f21ca-695c-4cd9-a086-08aff304d820" containerName="installer"
Oct 14 13:19:32.507708 master-1 kubenswrapper[4740]: I1014 13:19:32.507680 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1"
Oct 14 13:19:32.512334 master-1 kubenswrapper[4740]: I1014 13:19:32.510828 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-var-lock\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1"
Oct 14 13:19:32.512334 master-1 kubenswrapper[4740]: I1014 13:19:32.510922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a0192d3-865e-4cc8-8e55-a20fa738671d-kube-api-access\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1"
Oct 14 13:19:32.512334 master-1 kubenswrapper[4740]: I1014 13:19:32.510957 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-p7d8w"
Oct 14 13:19:32.512334 master-1 kubenswrapper[4740]: I1014 13:19:32.511003 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1"
Oct 14 13:19:32.519895 master-1 kubenswrapper[4740]: I1014 13:19:32.519848 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"]
Oct 14 13:19:32.611869 master-1 kubenswrapper[4740]: I1014 13:19:32.611793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-kubelet-dir\") pod \"installer-4-master-1\" (UID:
\"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:19:32.612113 master-1 kubenswrapper[4740]: I1014 13:19:32.611943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-var-lock\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:19:32.612113 master-1 kubenswrapper[4740]: I1014 13:19:32.611984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a0192d3-865e-4cc8-8e55-a20fa738671d-kube-api-access\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:19:32.612113 master-1 kubenswrapper[4740]: I1014 13:19:32.612019 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-kubelet-dir\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:19:32.612288 master-1 kubenswrapper[4740]: I1014 13:19:32.612113 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-var-lock\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:19:32.647868 master-1 kubenswrapper[4740]: I1014 13:19:32.647826 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a0192d3-865e-4cc8-8e55-a20fa738671d-kube-api-access\") pod \"installer-4-master-1\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " 
pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:19:32.825035 master-1 kubenswrapper[4740]: I1014 13:19:32.824821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:19:33.312140 master-1 kubenswrapper[4740]: I1014 13:19:33.312082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 14 13:19:33.317187 master-1 kubenswrapper[4740]: W1014 13:19:33.317139 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7a0192d3_865e_4cc8_8e55_a20fa738671d.slice/crio-9c63ab89eb7951d7e13ada86f8f9b5b93f74b2f5cc175ef681fbc6f8b086bd80 WatchSource:0}: Error finding container 9c63ab89eb7951d7e13ada86f8f9b5b93f74b2f5cc175ef681fbc6f8b086bd80: Status 404 returned error can't find the container with id 9c63ab89eb7951d7e13ada86f8f9b5b93f74b2f5cc175ef681fbc6f8b086bd80 Oct 14 13:19:33.331497 master-1 kubenswrapper[4740]: I1014 13:19:33.331421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"7a0192d3-865e-4cc8-8e55-a20fa738671d","Type":"ContainerStarted","Data":"9c63ab89eb7951d7e13ada86f8f9b5b93f74b2f5cc175ef681fbc6f8b086bd80"} Oct 14 13:19:34.348288 master-1 kubenswrapper[4740]: I1014 13:19:34.348174 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"7a0192d3-865e-4cc8-8e55-a20fa738671d","Type":"ContainerStarted","Data":"2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079"} Oct 14 13:19:34.385302 master-1 kubenswrapper[4740]: I1014 13:19:34.385190 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-1" podStartSLOduration=2.385163766 podStartE2EDuration="2.385163766s" podCreationTimestamp="2025-10-14 13:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:19:34.375705366 +0000 UTC m=+800.185994795" watchObservedRunningTime="2025-10-14 13:19:34.385163766 +0000 UTC m=+800.195453125" Oct 14 13:19:51.307743 master-1 kubenswrapper[4740]: I1014 13:19:51.307622 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 14 13:19:51.308833 master-1 kubenswrapper[4740]: I1014 13:19:51.308321 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-1" podUID="7a0192d3-865e-4cc8-8e55-a20fa738671d" containerName="installer" containerID="cri-o://2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079" gracePeriod=30 Oct 14 13:19:56.106627 master-1 kubenswrapper[4740]: I1014 13:19:56.106560 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-1"] Oct 14 13:19:56.107693 master-1 kubenswrapper[4740]: I1014 13:19:56.107650 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.121409 master-1 kubenswrapper[4740]: I1014 13:19:56.121365 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-1"] Oct 14 13:19:56.164968 master-1 kubenswrapper[4740]: I1014 13:19:56.164884 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-var-lock\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.165194 master-1 kubenswrapper[4740]: I1014 13:19:56.165038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.165194 master-1 kubenswrapper[4740]: I1014 13:19:56.165148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/890e089e-991a-46b8-87ed-22aa882c98b0-kube-api-access\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.267184 master-1 kubenswrapper[4740]: I1014 13:19:56.267121 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-var-lock\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.267184 master-1 kubenswrapper[4740]: I1014 13:19:56.267186 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.267431 master-1 kubenswrapper[4740]: I1014 13:19:56.267246 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/890e089e-991a-46b8-87ed-22aa882c98b0-kube-api-access\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.267431 master-1 kubenswrapper[4740]: I1014 13:19:56.267328 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-var-lock\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.267506 master-1 kubenswrapper[4740]: I1014 13:19:56.267407 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-kubelet-dir\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.303074 master-1 kubenswrapper[4740]: I1014 13:19:56.303021 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/890e089e-991a-46b8-87ed-22aa882c98b0-kube-api-access\") pod \"installer-5-master-1\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.432899 master-1 kubenswrapper[4740]: I1014 13:19:56.432691 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:19:56.857479 master-1 kubenswrapper[4740]: I1014 13:19:56.857413 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-1"] Oct 14 13:19:57.539871 master-1 kubenswrapper[4740]: I1014 13:19:57.539800 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"890e089e-991a-46b8-87ed-22aa882c98b0","Type":"ContainerStarted","Data":"2023d7e5f7d8ebd2e5fbb308a39411be045784ef99f9b18924e2e59291c0ad7c"} Oct 14 13:19:57.539871 master-1 kubenswrapper[4740]: I1014 13:19:57.539870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"890e089e-991a-46b8-87ed-22aa882c98b0","Type":"ContainerStarted","Data":"a1c655ae79c34ec2f62ab60bdbc6baa8f704941385973a0648a0e6709682a373"} Oct 14 13:19:57.574886 master-1 kubenswrapper[4740]: I1014 13:19:57.574790 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-1" podStartSLOduration=1.574755006 podStartE2EDuration="1.574755006s" podCreationTimestamp="2025-10-14 13:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:19:57.565587085 +0000 UTC m=+823.375876444" watchObservedRunningTime="2025-10-14 13:19:57.574755006 +0000 UTC m=+823.385044375" Oct 14 13:20:05.355086 master-1 kubenswrapper[4740]: I1014 13:20:05.355049 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-1_7a0192d3-865e-4cc8-8e55-a20fa738671d/installer/0.log" Oct 14 13:20:05.355614 master-1 kubenswrapper[4740]: I1014 13:20:05.355135 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:20:05.409534 master-1 kubenswrapper[4740]: I1014 13:20:05.409469 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-kubelet-dir\") pod \"7a0192d3-865e-4cc8-8e55-a20fa738671d\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " Oct 14 13:20:05.409765 master-1 kubenswrapper[4740]: I1014 13:20:05.409553 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-var-lock\") pod \"7a0192d3-865e-4cc8-8e55-a20fa738671d\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " Oct 14 13:20:05.409765 master-1 kubenswrapper[4740]: I1014 13:20:05.409580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a0192d3-865e-4cc8-8e55-a20fa738671d-kube-api-access\") pod \"7a0192d3-865e-4cc8-8e55-a20fa738671d\" (UID: \"7a0192d3-865e-4cc8-8e55-a20fa738671d\") " Oct 14 13:20:05.409765 master-1 kubenswrapper[4740]: I1014 13:20:05.409653 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a0192d3-865e-4cc8-8e55-a20fa738671d" (UID: "7a0192d3-865e-4cc8-8e55-a20fa738671d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:20:05.409992 master-1 kubenswrapper[4740]: I1014 13:20:05.409747 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-var-lock" (OuterVolumeSpecName: "var-lock") pod "7a0192d3-865e-4cc8-8e55-a20fa738671d" (UID: "7a0192d3-865e-4cc8-8e55-a20fa738671d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:20:05.413060 master-1 kubenswrapper[4740]: I1014 13:20:05.413005 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0192d3-865e-4cc8-8e55-a20fa738671d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a0192d3-865e-4cc8-8e55-a20fa738671d" (UID: "7a0192d3-865e-4cc8-8e55-a20fa738671d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:20:05.511535 master-1 kubenswrapper[4740]: I1014 13:20:05.511467 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:20:05.511535 master-1 kubenswrapper[4740]: I1014 13:20:05.511502 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a0192d3-865e-4cc8-8e55-a20fa738671d-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:20:05.511535 master-1 kubenswrapper[4740]: I1014 13:20:05.511512 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a0192d3-865e-4cc8-8e55-a20fa738671d-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:20:05.601421 master-1 kubenswrapper[4740]: I1014 13:20:05.600632 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-1_7a0192d3-865e-4cc8-8e55-a20fa738671d/installer/0.log" Oct 14 13:20:05.601421 master-1 kubenswrapper[4740]: I1014 13:20:05.600752 4740 generic.go:334] "Generic (PLEG): container finished" podID="7a0192d3-865e-4cc8-8e55-a20fa738671d" containerID="2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079" exitCode=1 Oct 14 13:20:05.601421 master-1 kubenswrapper[4740]: I1014 13:20:05.600805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"7a0192d3-865e-4cc8-8e55-a20fa738671d","Type":"ContainerDied","Data":"2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079"} Oct 14 13:20:05.601421 master-1 kubenswrapper[4740]: I1014 13:20:05.600851 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-1" event={"ID":"7a0192d3-865e-4cc8-8e55-a20fa738671d","Type":"ContainerDied","Data":"9c63ab89eb7951d7e13ada86f8f9b5b93f74b2f5cc175ef681fbc6f8b086bd80"} Oct 14 13:20:05.601421 master-1 kubenswrapper[4740]: I1014 13:20:05.600904 4740 scope.go:117] "RemoveContainer" containerID="2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079" Oct 14 13:20:05.601421 master-1 kubenswrapper[4740]: I1014 13:20:05.601208 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-1" Oct 14 13:20:05.624218 master-1 kubenswrapper[4740]: I1014 13:20:05.624172 4740 scope.go:117] "RemoveContainer" containerID="2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079" Oct 14 13:20:05.624767 master-1 kubenswrapper[4740]: E1014 13:20:05.624701 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079\": container with ID starting with 2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079 not found: ID does not exist" containerID="2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079" Oct 14 13:20:05.624767 master-1 kubenswrapper[4740]: I1014 13:20:05.624746 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079"} err="failed to get container status \"2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079\": rpc error: code = NotFound desc = could not find 
container \"2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079\": container with ID starting with 2c5ba7cd5f8f4e21f2c6cb7e422a4f226269526c6e89bfc2e665f24416d26079 not found: ID does not exist" Oct 14 13:20:05.655290 master-1 kubenswrapper[4740]: I1014 13:20:05.655185 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 14 13:20:05.667519 master-1 kubenswrapper[4740]: I1014 13:20:05.667456 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-1"] Oct 14 13:20:06.958422 master-1 kubenswrapper[4740]: I1014 13:20:06.958310 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0192d3-865e-4cc8-8e55-a20fa738671d" path="/var/lib/kubelet/pods/7a0192d3-865e-4cc8-8e55-a20fa738671d/volumes" Oct 14 13:20:45.521200 master-1 kubenswrapper[4740]: I1014 13:20:45.521026 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:20:45.522402 master-1 kubenswrapper[4740]: I1014 13:20:45.521522 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" containerID="cri-o://4ce9abd39c3aeaad89568cd60fb0e427f27d0f38adcdff7f77bef90692c33338" gracePeriod=135 Oct 14 13:20:45.522402 master-1 kubenswrapper[4740]: I1014 13:20:45.521725 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e45346c1521e16aa358a9e0243b29f57c340b98cd05f02aa4089f7ed3a6ef8d0" gracePeriod=135 Oct 14 13:20:45.522402 master-1 kubenswrapper[4740]: I1014 13:20:45.521801 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f8f2db597279287746568152e8aa7a3e94b07b8fc1075f744d7794b4d682afbc" gracePeriod=135 Oct 14 13:20:45.522402 master-1 kubenswrapper[4740]: I1014 13:20:45.521873 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d5925b84c60ef6f3443add991b592427b7d32ac6283cdca5542873b4676b09d9" gracePeriod=135 Oct 14 13:20:45.522402 master-1 kubenswrapper[4740]: I1014 13:20:45.521938 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3f98f494037c823a91c8e5e8cb3c5e66596570a1d3b528a3c2d4edd5aa660c69" gracePeriod=135 Oct 14 13:20:45.525274 master-1 kubenswrapper[4740]: I1014 13:20:45.525173 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:20:45.525678 master-1 kubenswrapper[4740]: E1014 13:20:45.525615 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="setup" Oct 14 13:20:45.525678 master-1 kubenswrapper[4740]: I1014 13:20:45.525648 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="setup" Oct 14 13:20:45.525678 master-1 kubenswrapper[4740]: E1014 13:20:45.525669 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: I1014 13:20:45.525715 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: E1014 13:20:45.525745 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: I1014 13:20:45.525760 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: E1014 13:20:45.525788 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: I1014 13:20:45.525800 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: E1014 13:20:45.525819 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: I1014 13:20:45.525831 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: E1014 13:20:45.525847 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: I1014 13:20:45.525860 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: E1014 13:20:45.525877 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7a0192d3-865e-4cc8-8e55-a20fa738671d" containerName="installer" Oct 14 13:20:45.526014 master-1 kubenswrapper[4740]: I1014 13:20:45.525891 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0192d3-865e-4cc8-8e55-a20fa738671d" containerName="installer" Oct 14 13:20:45.529170 master-1 kubenswrapper[4740]: I1014 13:20:45.526129 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-check-endpoints" Oct 14 13:20:45.529170 master-1 kubenswrapper[4740]: I1014 13:20:45.526153 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver" Oct 14 13:20:45.529170 master-1 kubenswrapper[4740]: I1014 13:20:45.526181 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:20:45.529170 master-1 kubenswrapper[4740]: I1014 13:20:45.526206 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-cert-syncer" Oct 14 13:20:45.529170 master-1 kubenswrapper[4740]: I1014 13:20:45.526259 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0192d3-865e-4cc8-8e55-a20fa738671d" containerName="installer" Oct 14 13:20:45.529170 master-1 kubenswrapper[4740]: I1014 13:20:45.526277 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39186c2ebd02622803bdbec6984de2a" containerName="kube-apiserver-insecure-readyz" Oct 14 13:20:45.639788 master-1 kubenswrapper[4740]: I1014 13:20:45.639710 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.640045 master-1 kubenswrapper[4740]: I1014 13:20:45.639932 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.640045 master-1 kubenswrapper[4740]: I1014 13:20:45.640010 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.741392 master-1 kubenswrapper[4740]: I1014 13:20:45.741220 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.741392 master-1 kubenswrapper[4740]: I1014 13:20:45.741303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.741392 master-1 kubenswrapper[4740]: I1014 13:20:45.741385 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.741713 master-1 kubenswrapper[4740]: I1014 13:20:45.741385 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.741713 master-1 kubenswrapper[4740]: I1014 13:20:45.741478 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.741713 master-1 kubenswrapper[4740]: I1014 13:20:45.741481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:20:45.921472 master-1 kubenswrapper[4740]: I1014 13:20:45.921406 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_e39186c2ebd02622803bdbec6984de2a/kube-apiserver-cert-syncer/0.log" Oct 14 13:20:45.922799 master-1 kubenswrapper[4740]: I1014 13:20:45.922631 4740 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="e45346c1521e16aa358a9e0243b29f57c340b98cd05f02aa4089f7ed3a6ef8d0" exitCode=0 Oct 14 13:20:45.922799 master-1 kubenswrapper[4740]: I1014 13:20:45.922675 4740 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="f8f2db597279287746568152e8aa7a3e94b07b8fc1075f744d7794b4d682afbc" exitCode=0 Oct 14 
13:20:45.922799 master-1 kubenswrapper[4740]: I1014 13:20:45.922689 4740 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="d5925b84c60ef6f3443add991b592427b7d32ac6283cdca5542873b4676b09d9" exitCode=0 Oct 14 13:20:45.922799 master-1 kubenswrapper[4740]: I1014 13:20:45.922703 4740 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="3f98f494037c823a91c8e5e8cb3c5e66596570a1d3b528a3c2d4edd5aa660c69" exitCode=2 Oct 14 13:20:45.924949 master-1 kubenswrapper[4740]: I1014 13:20:45.924910 4740 generic.go:334] "Generic (PLEG): container finished" podID="890e089e-991a-46b8-87ed-22aa882c98b0" containerID="2023d7e5f7d8ebd2e5fbb308a39411be045784ef99f9b18924e2e59291c0ad7c" exitCode=0 Oct 14 13:20:45.925053 master-1 kubenswrapper[4740]: I1014 13:20:45.924958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"890e089e-991a-46b8-87ed-22aa882c98b0","Type":"ContainerDied","Data":"2023d7e5f7d8ebd2e5fbb308a39411be045784ef99f9b18924e2e59291c0ad7c"} Oct 14 13:20:45.962833 master-1 kubenswrapper[4740]: I1014 13:20:45.962661 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022" Oct 14 13:20:47.450536 master-1 kubenswrapper[4740]: I1014 13:20:47.450413 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:20:47.567207 master-1 kubenswrapper[4740]: I1014 13:20:47.567119 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-var-lock\") pod \"890e089e-991a-46b8-87ed-22aa882c98b0\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " Oct 14 13:20:47.567508 master-1 kubenswrapper[4740]: I1014 13:20:47.567372 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/890e089e-991a-46b8-87ed-22aa882c98b0-kube-api-access\") pod \"890e089e-991a-46b8-87ed-22aa882c98b0\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " Oct 14 13:20:47.567508 master-1 kubenswrapper[4740]: I1014 13:20:47.567366 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-var-lock" (OuterVolumeSpecName: "var-lock") pod "890e089e-991a-46b8-87ed-22aa882c98b0" (UID: "890e089e-991a-46b8-87ed-22aa882c98b0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:20:47.567508 master-1 kubenswrapper[4740]: I1014 13:20:47.567505 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-kubelet-dir\") pod \"890e089e-991a-46b8-87ed-22aa882c98b0\" (UID: \"890e089e-991a-46b8-87ed-22aa882c98b0\") " Oct 14 13:20:47.567673 master-1 kubenswrapper[4740]: I1014 13:20:47.567555 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "890e089e-991a-46b8-87ed-22aa882c98b0" (UID: "890e089e-991a-46b8-87ed-22aa882c98b0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:20:47.567946 master-1 kubenswrapper[4740]: I1014 13:20:47.567901 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:20:47.567946 master-1 kubenswrapper[4740]: I1014 13:20:47.567936 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/890e089e-991a-46b8-87ed-22aa882c98b0-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:20:47.571631 master-1 kubenswrapper[4740]: I1014 13:20:47.571571 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890e089e-991a-46b8-87ed-22aa882c98b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "890e089e-991a-46b8-87ed-22aa882c98b0" (UID: "890e089e-991a-46b8-87ed-22aa882c98b0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:20:47.668921 master-1 kubenswrapper[4740]: I1014 13:20:47.668751 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/890e089e-991a-46b8-87ed-22aa882c98b0-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:20:47.945171 master-1 kubenswrapper[4740]: I1014 13:20:47.944967 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-1" event={"ID":"890e089e-991a-46b8-87ed-22aa882c98b0","Type":"ContainerDied","Data":"a1c655ae79c34ec2f62ab60bdbc6baa8f704941385973a0648a0e6709682a373"} Oct 14 13:20:47.945171 master-1 kubenswrapper[4740]: I1014 13:20:47.945037 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c655ae79c34ec2f62ab60bdbc6baa8f704941385973a0648a0e6709682a373" Oct 14 13:20:47.945171 master-1 kubenswrapper[4740]: I1014 13:20:47.945063 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-1" Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: I1014 13:20:48.964100 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: 
[+]poststarthook/start-apiextensions-controllers ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-discovery-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:20:48.964188 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:20:48.967459 master-1 kubenswrapper[4740]: I1014 13:20:48.964214 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:20:51.535092 master-1 kubenswrapper[4740]: I1014 13:20:51.535001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:20:51.536096 master-1 kubenswrapper[4740]: E1014 13:20:51.535358 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker podName:cc579fa5-c1e0-40ed-b1f3-e953a42e74d6 nodeName:}" failed. No retries permitted until 2025-10-14 13:22:53.535308572 +0000 UTC m=+999.345597981 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker") pod "catalogd-controller-manager-596f9d8bbf-wn7c6" (UID: "cc579fa5-c1e0-40ed-b1f3-e953a42e74d6") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:20:51.636475 master-1 kubenswrapper[4740]: I1014 13:20:51.636297 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:20:51.637498 master-1 kubenswrapper[4740]: E1014 13:20:51.637423 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker podName:180ced15-1fb1-464d-85f2-0bcc0d836dab nodeName:}" failed. No retries permitted until 2025-10-14 13:22:53.637374857 +0000 UTC m=+999.447664276 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-docker" (UniqueName: "kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker") pod "operator-controller-controller-manager-668cb7cdc8-lwlfz" (UID: "180ced15-1fb1-464d-85f2-0bcc0d836dab") : hostPath type check failed: /etc/docker is not a directory Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: I1014 13:20:53.964218 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: 
[+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-registration-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:20:53.964345 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:20:53.967389 master-1 kubenswrapper[4740]: I1014 13:20:53.964351 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:20:54.007634 master-1 kubenswrapper[4740]: E1014 13:20:54.007547 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" podUID="cc579fa5-c1e0-40ed-b1f3-e953a42e74d6" Oct 14 13:20:54.007907 master-1 kubenswrapper[4740]: E1014 13:20:54.007642 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-docker], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" podUID="180ced15-1fb1-464d-85f2-0bcc0d836dab" Oct 14 13:20:54.861130 master-1 kubenswrapper[4740]: I1014 13:20:54.860977 4740 
patch_prober.go:28] interesting pod/route-controller-manager-77674cffc8-k5fvv container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Oct 14 13:20:54.861130 master-1 kubenswrapper[4740]: I1014 13:20:54.861027 4740 patch_prober.go:28] interesting pod/route-controller-manager-77674cffc8-k5fvv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Oct 14 13:20:54.861130 master-1 kubenswrapper[4740]: I1014 13:20:54.861065 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Oct 14 13:20:54.861130 master-1 kubenswrapper[4740]: I1014 13:20:54.861111 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Oct 14 13:20:55.002244 master-1 kubenswrapper[4740]: I1014 13:20:55.002163 4740 generic.go:334] "Generic (PLEG): container finished" podID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerID="61e2daca2897fcccbe37061c0f5b0d2fe210930fbd45f1ce31fa38a3f52c60ff" exitCode=0 Oct 14 13:20:55.002848 master-1 kubenswrapper[4740]: I1014 13:20:55.002207 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerDied","Data":"61e2daca2897fcccbe37061c0f5b0d2fe210930fbd45f1ce31fa38a3f52c60ff"} Oct 14 13:20:55.002848 master-1 kubenswrapper[4740]: I1014 13:20:55.002299 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:20:55.002848 master-1 kubenswrapper[4740]: I1014 13:20:55.002330 4740 scope.go:117] "RemoveContainer" containerID="ffd4998245ebc17a6f03025aacb5ec867c7637eefba8864af77e8d4e546113b1" Oct 14 13:20:55.002848 master-1 kubenswrapper[4740]: I1014 13:20:55.002369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:20:55.003204 master-1 kubenswrapper[4740]: I1014 13:20:55.003150 4740 scope.go:117] "RemoveContainer" containerID="61e2daca2897fcccbe37061c0f5b0d2fe210930fbd45f1ce31fa38a3f52c60ff" Oct 14 13:20:55.003645 master-1 kubenswrapper[4740]: E1014 13:20:55.003583 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-77674cffc8-k5fvv_openshift-route-controller-manager(e4c8f12e-4b62-49eb-a466-af75a571c62f)\"" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: I1014 13:20:58.965359 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]log ok Oct 14 
13:20:58.965460 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok 
Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: [-]shutdown 
failed: reason withheld Oct 14 13:20:58.965460 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:20:58.968832 master-1 kubenswrapper[4740]: I1014 13:20:58.965459 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:20:58.968832 master-1 kubenswrapper[4740]: I1014 13:20:58.965598 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: I1014 13:20:58.972109 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok 
Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:20:58.972202 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:20:58.974188 master-1 kubenswrapper[4740]: I1014 13:20:58.972258 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: I1014 13:21:03.962776 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok 
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:03.962834 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:03.965938 master-1 kubenswrapper[4740]: I1014 13:21:03.964507 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:04.859469 master-1 kubenswrapper[4740]: I1014 13:21:04.859377 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:21:04.859469 master-1 kubenswrapper[4740]: I1014 13:21:04.859460 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:21:04.860169 master-1 kubenswrapper[4740]: I1014 13:21:04.860124 4740 scope.go:117] "RemoveContainer" containerID="61e2daca2897fcccbe37061c0f5b0d2fe210930fbd45f1ce31fa38a3f52c60ff"
Oct 14 13:21:05.081111 master-1 kubenswrapper[4740]: I1014 13:21:05.081020 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerStarted","Data":"e645c51431c02e45ea744727452686571dd3fa84b28317ebe10c73ac34dfab66"}
Oct 14 13:21:05.081924 master-1 kubenswrapper[4740]: I1014 13:21:05.081759 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:21:05.083590 master-1 kubenswrapper[4740]: I1014 13:21:05.083541 4740 patch_prober.go:28] interesting pod/route-controller-manager-77674cffc8-k5fvv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body=
Oct 14 13:21:05.083688 master-1 kubenswrapper[4740]: I1014 13:21:05.083601 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused"
Oct 14 13:21:06.096944 master-1 kubenswrapper[4740]: I1014 13:21:06.096734 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: I1014 13:21:08.964203 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:08.964367 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:08.968758 master-1 kubenswrapper[4740]: I1014 13:21:08.964389 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: I1014 13:21:13.963340 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:13.963458 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:13.967192 master-1 kubenswrapper[4740]: I1014 13:21:13.963473 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: I1014 13:21:18.964090 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:18.964200 master-1 kubenswrapper[4740]: I1014 13:21:18.964179 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: I1014 13:21:23.964118 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:23.964207 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:23.968053 master-1 kubenswrapper[4740]: I1014 13:21:23.964276 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: I1014 13:21:28.963272 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:28.963404 master-1 kubenswrapper[4740]: I1014 13:21:28.963383 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:29.931701 master-1 kubenswrapper[4740]: E1014 13:21:29.931616 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:21:29.932378 master-1 kubenswrapper[4740]: E1014 13:21:29.931746 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:23:31.931714286 +0000 UTC m=+1037.742003655 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: I1014 13:21:33.961691 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:33.961788 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:33.963919 master-1 kubenswrapper[4740]: I1014 13:21:33.961791 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: I1014 13:21:38.963860 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: 
[+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: 
[+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:21:38.963931 master-1 kubenswrapper[4740]: I1014 13:21:38.963936 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: I1014 13:21:43.963706 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:21:43.963808 master-1 
kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: 
[+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:21:43.963808 master-1 kubenswrapper[4740]: I1014 13:21:43.963798 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:21:46.432089 master-1 kubenswrapper[4740]: I1014 13:21:46.432026 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf"] Oct 14 13:21:46.433027 master-1 kubenswrapper[4740]: E1014 13:21:46.432372 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890e089e-991a-46b8-87ed-22aa882c98b0" containerName="installer" Oct 14 13:21:46.433027 master-1 kubenswrapper[4740]: I1014 13:21:46.432392 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="890e089e-991a-46b8-87ed-22aa882c98b0" containerName="installer" Oct 14 13:21:46.433027 master-1 kubenswrapper[4740]: I1014 13:21:46.432559 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="890e089e-991a-46b8-87ed-22aa882c98b0" containerName="installer" Oct 14 13:21:46.437362 master-1 kubenswrapper[4740]: I1014 13:21:46.437312 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.442031 master-1 kubenswrapper[4740]: I1014 13:21:46.441990 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 14 13:21:46.442258 master-1 kubenswrapper[4740]: I1014 13:21:46.442172 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 14 13:21:46.443584 master-1 kubenswrapper[4740]: I1014 13:21:46.443545 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 14 13:21:46.443693 master-1 kubenswrapper[4740]: I1014 13:21:46.443677 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 14 13:21:46.443761 master-1 kubenswrapper[4740]: I1014 13:21:46.443545 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 14 13:21:46.443834 master-1 kubenswrapper[4740]: I1014 13:21:46.443769 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 14 13:21:46.443834 master-1 kubenswrapper[4740]: I1014 13:21:46.443821 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 14 13:21:46.443943 master-1 kubenswrapper[4740]: I1014 13:21:46.443865 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 14 13:21:46.444521 master-1 kubenswrapper[4740]: I1014 13:21:46.444499 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 14 13:21:46.444599 master-1 kubenswrapper[4740]: I1014 13:21:46.444535 4740 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 14 13:21:46.444839 master-1 kubenswrapper[4740]: I1014 13:21:46.444814 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 14 13:21:46.444940 master-1 kubenswrapper[4740]: I1014 13:21:46.444916 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-5vqgl" Oct 14 13:21:46.457376 master-1 kubenswrapper[4740]: I1014 13:21:46.456983 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 14 13:21:46.464352 master-1 kubenswrapper[4740]: I1014 13:21:46.464298 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479122 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479158 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-policies\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479197 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479212 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479242 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-session\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479276 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdh9\" (UniqueName: \"kubernetes.io/projected/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-kube-api-access-nbdh9\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479320 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479336 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-dir\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: 
\"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479368 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.479758 master-1 kubenswrapper[4740]: I1014 13:21:46.479386 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.482115 master-1 kubenswrapper[4740]: I1014 13:21:46.482072 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf"] Oct 14 13:21:46.493779 master-1 kubenswrapper[4740]: I1014 13:21:46.493720 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l"] Oct 14 13:21:46.494686 master-1 kubenswrapper[4740]: I1014 13:21:46.494651 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" Oct 14 13:21:46.498076 master-1 kubenswrapper[4740]: I1014 13:21:46.498041 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xvwmq"] Oct 14 13:21:46.498800 master-1 kubenswrapper[4740]: I1014 13:21:46.498768 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-qhnj6" Oct 14 13:21:46.499463 master-1 kubenswrapper[4740]: I1014 13:21:46.499436 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.500478 master-1 kubenswrapper[4740]: I1014 13:21:46.500435 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Oct 14 13:21:46.501446 master-1 kubenswrapper[4740]: I1014 13:21:46.501371 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc"] Oct 14 13:21:46.501548 master-1 kubenswrapper[4740]: I1014 13:21:46.501399 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 14 13:21:46.501880 master-1 kubenswrapper[4740]: I1014 13:21:46.501793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-svq88" Oct 14 13:21:46.502143 master-1 kubenswrapper[4740]: I1014 13:21:46.502114 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.504995 master-1 kubenswrapper[4740]: I1014 13:21:46.504961 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 14 13:21:46.505163 master-1 kubenswrapper[4740]: I1014 13:21:46.505133 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 14 13:21:46.505368 master-1 kubenswrapper[4740]: I1014 13:21:46.505337 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-68d7l" Oct 14 13:21:46.518141 master-1 kubenswrapper[4740]: I1014 13:21:46.518090 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc"] Oct 14 13:21:46.522743 master-1 kubenswrapper[4740]: I1014 13:21:46.522704 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l"] Oct 14 13:21:46.581003 master-1 kubenswrapper[4740]: I1014 13:21:46.580946 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e37236b2-d620-45d8-985a-913c91466842-host\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581017 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e37236b2-d620-45d8-985a-913c91466842-serviceca\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581039 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f613b5b4-9327-42de-b93a-33746e809ce7-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-f5bnc\" (UID: \"f613b5b4-9327-42de-b93a-33746e809ce7\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f613b5b4-9327-42de-b93a-33746e809ce7-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-f5bnc\" (UID: \"f613b5b4-9327-42de-b93a-33746e809ce7\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581103 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581124 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-policies\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581171 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581221 master-1 kubenswrapper[4740]: I1014 13:21:46.581187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-session\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581242 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bf86a34b-7648-4a6e-b4ec-931d2d016dc4-monitoring-plugin-cert\") pod \"monitoring-plugin-75bcf9f5fd-xkw2l\" (UID: \"bf86a34b-7648-4a6e-b4ec-931d2d016dc4\") " pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbdh9\" (UniqueName: \"kubernetes.io/projected/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-kube-api-access-nbdh9\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " 
pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581307 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581328 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-login\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-dir\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581418 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581436 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581469 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745h2\" (UniqueName: \"kubernetes.io/projected/e37236b2-d620-45d8-985a-913c91466842-kube-api-access-745h2\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581501 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.581557 master-1 kubenswrapper[4740]: I1014 13:21:46.581518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.582720 master-1 kubenswrapper[4740]: I1014 13:21:46.582684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.582789 master-1 kubenswrapper[4740]: I1014 13:21:46.582763 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-dir\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.583355 master-1 kubenswrapper[4740]: I1014 13:21:46.583309 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.583906 master-1 kubenswrapper[4740]: I1014 13:21:46.583871 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-policies\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.583959 master-1 kubenswrapper[4740]: I1014 13:21:46.583899 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.586098 master-1 kubenswrapper[4740]: I1014 13:21:46.586050 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.586183 master-1 kubenswrapper[4740]: I1014 13:21:46.586130 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.586358 master-1 kubenswrapper[4740]: I1014 13:21:46.586319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-session\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.586418 master-1 kubenswrapper[4740]: I1014 13:21:46.586370 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.586516 master-1 kubenswrapper[4740]: I1014 13:21:46.586488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-login\") pod 
\"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.586708 master-1 kubenswrapper[4740]: I1014 13:21:46.586661 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.587536 master-1 kubenswrapper[4740]: I1014 13:21:46.587496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-error\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.614662 master-1 kubenswrapper[4740]: I1014 13:21:46.614611 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbdh9\" (UniqueName: \"kubernetes.io/projected/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-kube-api-access-nbdh9\") pod \"oauth-openshift-6ddc4f49f9-thnnf\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.682218 master-1 kubenswrapper[4740]: I1014 13:21:46.682095 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bf86a34b-7648-4a6e-b4ec-931d2d016dc4-monitoring-plugin-cert\") pod \"monitoring-plugin-75bcf9f5fd-xkw2l\" (UID: \"bf86a34b-7648-4a6e-b4ec-931d2d016dc4\") " pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" Oct 14 13:21:46.682218 master-1 kubenswrapper[4740]: I1014 13:21:46.682175 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745h2\" (UniqueName: \"kubernetes.io/projected/e37236b2-d620-45d8-985a-913c91466842-kube-api-access-745h2\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.682218 master-1 kubenswrapper[4740]: I1014 13:21:46.682206 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e37236b2-d620-45d8-985a-913c91466842-host\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.682543 master-1 kubenswrapper[4740]: I1014 13:21:46.682247 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e37236b2-d620-45d8-985a-913c91466842-serviceca\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.682543 master-1 kubenswrapper[4740]: I1014 13:21:46.682263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f613b5b4-9327-42de-b93a-33746e809ce7-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-f5bnc\" (UID: \"f613b5b4-9327-42de-b93a-33746e809ce7\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.682543 master-1 kubenswrapper[4740]: I1014 13:21:46.682278 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f613b5b4-9327-42de-b93a-33746e809ce7-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-f5bnc\" (UID: \"f613b5b4-9327-42de-b93a-33746e809ce7\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.682543 
master-1 kubenswrapper[4740]: I1014 13:21:46.682445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e37236b2-d620-45d8-985a-913c91466842-host\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.683212 master-1 kubenswrapper[4740]: I1014 13:21:46.683164 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f613b5b4-9327-42de-b93a-33746e809ce7-nginx-conf\") pod \"networking-console-plugin-85df6bdd68-f5bnc\" (UID: \"f613b5b4-9327-42de-b93a-33746e809ce7\") " pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.683293 master-1 kubenswrapper[4740]: I1014 13:21:46.683264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e37236b2-d620-45d8-985a-913c91466842-serviceca\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.687243 master-1 kubenswrapper[4740]: I1014 13:21:46.687184 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bf86a34b-7648-4a6e-b4ec-931d2d016dc4-monitoring-plugin-cert\") pod \"monitoring-plugin-75bcf9f5fd-xkw2l\" (UID: \"bf86a34b-7648-4a6e-b4ec-931d2d016dc4\") " pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" Oct 14 13:21:46.687319 master-1 kubenswrapper[4740]: I1014 13:21:46.687188 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f613b5b4-9327-42de-b93a-33746e809ce7-networking-console-plugin-cert\") pod \"networking-console-plugin-85df6bdd68-f5bnc\" (UID: \"f613b5b4-9327-42de-b93a-33746e809ce7\") " 
pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.705305 master-1 kubenswrapper[4740]: I1014 13:21:46.705262 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745h2\" (UniqueName: \"kubernetes.io/projected/e37236b2-d620-45d8-985a-913c91466842-kube-api-access-745h2\") pod \"node-ca-xvwmq\" (UID: \"e37236b2-d620-45d8-985a-913c91466842\") " pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.751610 master-1 kubenswrapper[4740]: I1014 13:21:46.751539 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:46.809699 master-1 kubenswrapper[4740]: I1014 13:21:46.809636 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" Oct 14 13:21:46.831334 master-1 kubenswrapper[4740]: I1014 13:21:46.831291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xvwmq" Oct 14 13:21:46.837121 master-1 kubenswrapper[4740]: I1014 13:21:46.837086 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" Oct 14 13:21:46.872894 master-1 kubenswrapper[4740]: I1014 13:21:46.867197 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:21:47.233600 master-1 kubenswrapper[4740]: I1014 13:21:47.233531 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf"] Oct 14 13:21:47.241191 master-1 kubenswrapper[4740]: W1014 13:21:47.241147 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90f36641_2c8a_4c3f_83c6_3ff25d86d52e.slice/crio-ad2501a5b6dfd9843afca7050825cc4de7b2bfbe4b1ad3bdf2add43879d1f231 WatchSource:0}: Error finding container ad2501a5b6dfd9843afca7050825cc4de7b2bfbe4b1ad3bdf2add43879d1f231: Status 404 returned error can't find the container with id ad2501a5b6dfd9843afca7050825cc4de7b2bfbe4b1ad3bdf2add43879d1f231 Oct 14 13:21:47.296860 master-1 kubenswrapper[4740]: I1014 13:21:47.296789 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l"] Oct 14 13:21:47.300989 master-1 kubenswrapper[4740]: W1014 13:21:47.300909 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf86a34b_7648_4a6e_b4ec_931d2d016dc4.slice/crio-241faab6a9742d54d9b10d1a4662ea3887e37ea0927e1500b3e782746f3b795e WatchSource:0}: Error finding container 241faab6a9742d54d9b10d1a4662ea3887e37ea0927e1500b3e782746f3b795e: Status 404 returned error can't find the container with id 241faab6a9742d54d9b10d1a4662ea3887e37ea0927e1500b3e782746f3b795e Oct 14 13:21:47.379044 master-1 kubenswrapper[4740]: I1014 13:21:47.378983 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc"] Oct 14 13:21:47.386624 
master-1 kubenswrapper[4740]: W1014 13:21:47.386535 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf613b5b4_9327_42de_b93a_33746e809ce7.slice/crio-701c7f92c55ec34c8911fae0a401f3b59cae62a871581c19b2ab4e1274cfe105 WatchSource:0}: Error finding container 701c7f92c55ec34c8911fae0a401f3b59cae62a871581c19b2ab4e1274cfe105: Status 404 returned error can't find the container with id 701c7f92c55ec34c8911fae0a401f3b59cae62a871581c19b2ab4e1274cfe105 Oct 14 13:21:47.408330 master-1 kubenswrapper[4740]: I1014 13:21:47.408220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xvwmq" event={"ID":"e37236b2-d620-45d8-985a-913c91466842","Type":"ContainerStarted","Data":"c175a347c2f95b4079d71df3108b79d2f3b81a030e783b26c349ec336a352c6c"} Oct 14 13:21:47.412872 master-1 kubenswrapper[4740]: I1014 13:21:47.412809 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" event={"ID":"bf86a34b-7648-4a6e-b4ec-931d2d016dc4","Type":"ContainerStarted","Data":"241faab6a9742d54d9b10d1a4662ea3887e37ea0927e1500b3e782746f3b795e"} Oct 14 13:21:47.417194 master-1 kubenswrapper[4740]: I1014 13:21:47.417108 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" event={"ID":"f613b5b4-9327-42de-b93a-33746e809ce7","Type":"ContainerStarted","Data":"701c7f92c55ec34c8911fae0a401f3b59cae62a871581c19b2ab4e1274cfe105"} Oct 14 13:21:47.420309 master-1 kubenswrapper[4740]: I1014 13:21:47.420198 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" event={"ID":"90f36641-2c8a-4c3f-83c6-3ff25d86d52e","Type":"ContainerStarted","Data":"ad2501a5b6dfd9843afca7050825cc4de7b2bfbe4b1ad3bdf2add43879d1f231"} Oct 14 13:21:48.971261 master-1 kubenswrapper[4740]: I1014 13:21:48.971196 4740 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm"] Oct 14 13:21:48.972775 master-1 kubenswrapper[4740]: I1014 13:21:48.972680 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:48.977811 master-1 kubenswrapper[4740]: I1014 13:21:48.976764 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-l545q" Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: I1014 13:21:48.979497 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: 
[+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:21:48.979545 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:21:48.982150 master-1 kubenswrapper[4740]: I1014 13:21:48.979565 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:21:48.983334 master-1 kubenswrapper[4740]: I1014 13:21:48.983296 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm"] Oct 14 13:21:49.016670 master-1 kubenswrapper[4740]: I1014 13:21:49.015318 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfc9786f-a073-4a63-8b8f-8267f7cff3ef-webhook-certs\") pod \"multus-admission-controller-6bc7c56dc6-4dpkm\" (UID: \"bfc9786f-a073-4a63-8b8f-8267f7cff3ef\") " 
pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:49.016670 master-1 kubenswrapper[4740]: I1014 13:21:49.015392 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mcl\" (UniqueName: \"kubernetes.io/projected/bfc9786f-a073-4a63-8b8f-8267f7cff3ef-kube-api-access-d9mcl\") pod \"multus-admission-controller-6bc7c56dc6-4dpkm\" (UID: \"bfc9786f-a073-4a63-8b8f-8267f7cff3ef\") " pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:49.118333 master-1 kubenswrapper[4740]: I1014 13:21:49.118218 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfc9786f-a073-4a63-8b8f-8267f7cff3ef-webhook-certs\") pod \"multus-admission-controller-6bc7c56dc6-4dpkm\" (UID: \"bfc9786f-a073-4a63-8b8f-8267f7cff3ef\") " pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:49.118333 master-1 kubenswrapper[4740]: I1014 13:21:49.118319 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mcl\" (UniqueName: \"kubernetes.io/projected/bfc9786f-a073-4a63-8b8f-8267f7cff3ef-kube-api-access-d9mcl\") pod \"multus-admission-controller-6bc7c56dc6-4dpkm\" (UID: \"bfc9786f-a073-4a63-8b8f-8267f7cff3ef\") " pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:49.123383 master-1 kubenswrapper[4740]: I1014 13:21:49.123339 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfc9786f-a073-4a63-8b8f-8267f7cff3ef-webhook-certs\") pod \"multus-admission-controller-6bc7c56dc6-4dpkm\" (UID: \"bfc9786f-a073-4a63-8b8f-8267f7cff3ef\") " pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:49.137569 master-1 kubenswrapper[4740]: I1014 13:21:49.137502 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-d9mcl\" (UniqueName: \"kubernetes.io/projected/bfc9786f-a073-4a63-8b8f-8267f7cff3ef-kube-api-access-d9mcl\") pod \"multus-admission-controller-6bc7c56dc6-4dpkm\" (UID: \"bfc9786f-a073-4a63-8b8f-8267f7cff3ef\") " pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:49.295789 master-1 kubenswrapper[4740]: I1014 13:21:49.295711 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" Oct 14 13:21:50.735921 master-1 kubenswrapper[4740]: I1014 13:21:50.735843 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm"] Oct 14 13:21:50.742131 master-1 kubenswrapper[4740]: W1014 13:21:50.742005 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc9786f_a073_4a63_8b8f_8267f7cff3ef.slice/crio-da53394fa41b0a5eb946add7fcdf25cfa0f7b152c2dd3a130677715e01dc61c4 WatchSource:0}: Error finding container da53394fa41b0a5eb946add7fcdf25cfa0f7b152c2dd3a130677715e01dc61c4: Status 404 returned error can't find the container with id da53394fa41b0a5eb946add7fcdf25cfa0f7b152c2dd3a130677715e01dc61c4 Oct 14 13:21:51.447513 master-1 kubenswrapper[4740]: I1014 13:21:51.447448 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" event={"ID":"f613b5b4-9327-42de-b93a-33746e809ce7","Type":"ContainerStarted","Data":"4fed65e9aac56232e89498549309995b789db24e54319ca5ee6bd736570f2fe7"} Oct 14 13:21:51.449096 master-1 kubenswrapper[4740]: I1014 13:21:51.449042 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" event={"ID":"90f36641-2c8a-4c3f-83c6-3ff25d86d52e","Type":"ContainerStarted","Data":"9b65a048ae7111360fb7f1062f39927fa58d6a586b76d6fe08a7abd7c74df1f4"} Oct 14 13:21:51.449799 
master-1 kubenswrapper[4740]: I1014 13:21:51.449756 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:51.453276 master-1 kubenswrapper[4740]: I1014 13:21:51.453215 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xvwmq" event={"ID":"e37236b2-d620-45d8-985a-913c91466842","Type":"ContainerStarted","Data":"5cd725ef29fe3c227043090c0656f7d49d1d79b074e6118964f9fcea28baf6dc"} Oct 14 13:21:51.457644 master-1 kubenswrapper[4740]: I1014 13:21:51.457599 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-65bb9777fc-bm4pw"] Oct 14 13:21:51.458561 master-1 kubenswrapper[4740]: I1014 13:21:51.458531 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65bb9777fc-bm4pw" Oct 14 13:21:51.459499 master-1 kubenswrapper[4740]: I1014 13:21:51.459471 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:21:51.460454 master-1 kubenswrapper[4740]: I1014 13:21:51.460406 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-tq8pv" Oct 14 13:21:51.460721 master-1 kubenswrapper[4740]: I1014 13:21:51.460668 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" event={"ID":"bfc9786f-a073-4a63-8b8f-8267f7cff3ef","Type":"ContainerStarted","Data":"b087f9f65cc85b6aa9454022b94ebe587ccf6dba84d6d231fd4690fcc1362e75"} Oct 14 13:21:51.460790 master-1 kubenswrapper[4740]: I1014 13:21:51.460732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" event={"ID":"bfc9786f-a073-4a63-8b8f-8267f7cff3ef","Type":"ContainerStarted","Data":"8e040fe3e207c6ed45d95b327a865d5ea72b73d5595aa5de9a9ef05ac8f7f42a"} Oct 
14 13:21:51.460790 master-1 kubenswrapper[4740]: I1014 13:21:51.460750 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" event={"ID":"bfc9786f-a073-4a63-8b8f-8267f7cff3ef","Type":"ContainerStarted","Data":"da53394fa41b0a5eb946add7fcdf25cfa0f7b152c2dd3a130677715e01dc61c4"} Oct 14 13:21:51.460790 master-1 kubenswrapper[4740]: I1014 13:21:51.460704 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 14 13:21:51.460790 master-1 kubenswrapper[4740]: I1014 13:21:51.460734 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 14 13:21:51.462274 master-1 kubenswrapper[4740]: I1014 13:21:51.462226 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" event={"ID":"bf86a34b-7648-4a6e-b4ec-931d2d016dc4","Type":"ContainerStarted","Data":"f45d37fe230a2796ed9b4a277873f8f7fde5bdc86be4f390bc3704c473256095"} Oct 14 13:21:51.464154 master-1 kubenswrapper[4740]: I1014 13:21:51.464124 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" Oct 14 13:21:51.469111 master-1 kubenswrapper[4740]: I1014 13:21:51.469057 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" Oct 14 13:21:51.475733 master-1 kubenswrapper[4740]: I1014 13:21:51.475686 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc" podStartSLOduration=107.561958288 podStartE2EDuration="1m50.475675947s" podCreationTimestamp="2025-10-14 13:20:01 +0000 UTC" firstStartedPulling="2025-10-14 13:21:47.390898008 +0000 UTC m=+933.201187387" lastFinishedPulling="2025-10-14 13:21:50.304615717 +0000 UTC m=+936.114905046" 
observedRunningTime="2025-10-14 13:21:51.475118103 +0000 UTC m=+937.285407432" watchObservedRunningTime="2025-10-14 13:21:51.475675947 +0000 UTC m=+937.285965276"
Oct 14 13:21:51.483353 master-1 kubenswrapper[4740]: I1014 13:21:51.483291 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-bm4pw"]
Oct 14 13:21:51.500657 master-1 kubenswrapper[4740]: I1014 13:21:51.500575 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l" podStartSLOduration=181.501145275 podStartE2EDuration="3m4.500554336s" podCreationTimestamp="2025-10-14 13:18:47 +0000 UTC" firstStartedPulling="2025-10-14 13:21:47.305279428 +0000 UTC m=+933.115568797" lastFinishedPulling="2025-10-14 13:21:50.304688529 +0000 UTC m=+936.114977858" observedRunningTime="2025-10-14 13:21:51.49845569 +0000 UTC m=+937.308745039" watchObservedRunningTime="2025-10-14 13:21:51.500554336 +0000 UTC m=+937.310843665"
Oct 14 13:21:51.519332 master-1 kubenswrapper[4740]: I1014 13:21:51.519221 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm" podStartSLOduration=3.519203991 podStartE2EDuration="3.519203991s" podCreationTimestamp="2025-10-14 13:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:21:51.518964725 +0000 UTC m=+937.329254084" watchObservedRunningTime="2025-10-14 13:21:51.519203991 +0000 UTC m=+937.329493330"
Oct 14 13:21:51.546299 master-1 kubenswrapper[4740]: I1014 13:21:51.546134 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" podStartSLOduration=199.475447185 podStartE2EDuration="3m22.546115892s" podCreationTimestamp="2025-10-14 13:18:29 +0000 UTC" firstStartedPulling="2025-10-14 13:21:47.244780262 +0000 UTC m=+933.055069631" lastFinishedPulling="2025-10-14 13:21:50.315449009 +0000 UTC m=+936.125738338" observedRunningTime="2025-10-14 13:21:51.54334301 +0000 UTC m=+937.353632349" watchObservedRunningTime="2025-10-14 13:21:51.546115892 +0000 UTC m=+937.356405231"
Oct 14 13:21:51.556083 master-1 kubenswrapper[4740]: I1014 13:21:51.554209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrtx6\" (UniqueName: \"kubernetes.io/projected/a32f08cc-7db7-455b-b904-e74aef3a165a-kube-api-access-hrtx6\") pod \"downloads-65bb9777fc-bm4pw\" (UID: \"a32f08cc-7db7-455b-b904-e74aef3a165a\") " pod="openshift-console/downloads-65bb9777fc-bm4pw"
Oct 14 13:21:51.566020 master-1 kubenswrapper[4740]: I1014 13:21:51.563370 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b"]
Oct 14 13:21:51.566020 master-1 kubenswrapper[4740]: I1014 13:21:51.563611 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="multus-admission-controller" containerID="cri-o://fe0e49ced70217b96835378cb2e4d66dc3f26f4f71857ad6f8c660fb548cbfcb" gracePeriod=30
Oct 14 13:21:51.566020 master-1 kubenswrapper[4740]: I1014 13:21:51.563732 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="kube-rbac-proxy" containerID="cri-o://440e19c3852cce8cff9d2a27938ed42d68f52d44868ed579ebaf8cd8b1e09955" gracePeriod=30
Oct 14 13:21:51.570041 master-1 kubenswrapper[4740]: I1014 13:21:51.569835 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xvwmq" podStartSLOduration=107.132489553 podStartE2EDuration="1m50.569806729s" podCreationTimestamp="2025-10-14 13:20:01 +0000 UTC" firstStartedPulling="2025-10-14 13:21:46.867109416 +0000 UTC m=+932.677398745" lastFinishedPulling="2025-10-14 13:21:50.304426582 +0000 UTC m=+936.114715921" observedRunningTime="2025-10-14 13:21:51.566965435 +0000 UTC m=+937.377254804" watchObservedRunningTime="2025-10-14 13:21:51.569806729 +0000 UTC m=+937.380096058"
Oct 14 13:21:51.663414 master-1 kubenswrapper[4740]: I1014 13:21:51.663317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrtx6\" (UniqueName: \"kubernetes.io/projected/a32f08cc-7db7-455b-b904-e74aef3a165a-kube-api-access-hrtx6\") pod \"downloads-65bb9777fc-bm4pw\" (UID: \"a32f08cc-7db7-455b-b904-e74aef3a165a\") " pod="openshift-console/downloads-65bb9777fc-bm4pw"
Oct 14 13:21:51.687131 master-1 kubenswrapper[4740]: I1014 13:21:51.687074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrtx6\" (UniqueName: \"kubernetes.io/projected/a32f08cc-7db7-455b-b904-e74aef3a165a-kube-api-access-hrtx6\") pod \"downloads-65bb9777fc-bm4pw\" (UID: \"a32f08cc-7db7-455b-b904-e74aef3a165a\") " pod="openshift-console/downloads-65bb9777fc-bm4pw"
Oct 14 13:21:51.779551 master-1 kubenswrapper[4740]: I1014 13:21:51.779476 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65bb9777fc-bm4pw"
Oct 14 13:21:52.259546 master-1 kubenswrapper[4740]: I1014 13:21:52.259468 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-65bb9777fc-bm4pw"]
Oct 14 13:21:52.473383 master-1 kubenswrapper[4740]: I1014 13:21:52.473263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-bm4pw" event={"ID":"a32f08cc-7db7-455b-b904-e74aef3a165a","Type":"ContainerStarted","Data":"fc5fd83a6818defd381e9726b535a96a2bb16140e1f9e5aef172d8de2cfc549a"}
Oct 14 13:21:52.476016 master-1 kubenswrapper[4740]: I1014 13:21:52.475948 4740 generic.go:334] "Generic (PLEG): container finished" podID="819cb927-5174-4df8-a723-cc07e53d9044" containerID="440e19c3852cce8cff9d2a27938ed42d68f52d44868ed579ebaf8cd8b1e09955" exitCode=0
Oct 14 13:21:52.476192 master-1 kubenswrapper[4740]: I1014 13:21:52.476089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" event={"ID":"819cb927-5174-4df8-a723-cc07e53d9044","Type":"ContainerDied","Data":"440e19c3852cce8cff9d2a27938ed42d68f52d44868ed579ebaf8cd8b1e09955"}
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: I1014 13:21:53.961200 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:21:53.961318 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:21:53.964039 master-1 kubenswrapper[4740]: I1014 13:21:53.961343 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:21:54.312547 master-1 kubenswrapper[4740]: I1014 13:21:54.312491 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-668956f9dd-mlrd8"]
Oct 14 13:21:54.313787 master-1 kubenswrapper[4740]: I1014 13:21:54.313755 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.317164 master-1 kubenswrapper[4740]: I1014 13:21:54.317129 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Oct 14 13:21:54.317484 master-1 kubenswrapper[4740]: I1014 13:21:54.317458 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 14 13:21:54.317715 master-1 kubenswrapper[4740]: I1014 13:21:54.317684 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-r2r7j"
Oct 14 13:21:54.323936 master-1 kubenswrapper[4740]: I1014 13:21:54.323897 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 14 13:21:54.324082 master-1 kubenswrapper[4740]: I1014 13:21:54.323939 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 14 13:21:54.324082 master-1 kubenswrapper[4740]: I1014 13:21:54.323898 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 14 13:21:54.334654 master-1 kubenswrapper[4740]: I1014 13:21:54.334597 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-668956f9dd-mlrd8"]
Oct 14 13:21:54.409396 master-1 kubenswrapper[4740]: I1014 13:21:54.409332 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-serving-cert\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.409396 master-1 kubenswrapper[4740]: I1014 13:21:54.409382 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-config\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.409396 master-1 kubenswrapper[4740]: I1014 13:21:54.409404 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7fk\" (UniqueName: \"kubernetes.io/projected/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-kube-api-access-wm7fk\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.409728 master-1 kubenswrapper[4740]: I1014 13:21:54.409445 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-oauth-serving-cert\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.409728 master-1 kubenswrapper[4740]: I1014 13:21:54.409549 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-oauth-config\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.409728 master-1 kubenswrapper[4740]: I1014 13:21:54.409624 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-service-ca\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.510731 master-1 kubenswrapper[4740]: I1014 13:21:54.510658 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-serving-cert\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.510731 master-1 kubenswrapper[4740]: I1014 13:21:54.510731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-config\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.511030 master-1 kubenswrapper[4740]: I1014 13:21:54.510770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm7fk\" (UniqueName: \"kubernetes.io/projected/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-kube-api-access-wm7fk\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.511030 master-1 kubenswrapper[4740]: I1014 13:21:54.510852 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-oauth-serving-cert\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.511030 master-1 kubenswrapper[4740]: I1014 13:21:54.510930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-oauth-config\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.511030 master-1 kubenswrapper[4740]: I1014 13:21:54.510976 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-service-ca\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.512268 master-1 kubenswrapper[4740]: I1014 13:21:54.512220 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-oauth-serving-cert\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.512268 master-1 kubenswrapper[4740]: I1014 13:21:54.512246 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-config\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.512436 master-1 kubenswrapper[4740]: I1014 13:21:54.512359 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-service-ca\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.515015 master-1 kubenswrapper[4740]: I1014 13:21:54.514980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-serving-cert\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.521857 master-1 kubenswrapper[4740]: I1014 13:21:54.521829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-oauth-config\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.534073 master-1 kubenswrapper[4740]: I1014 13:21:54.534035 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm7fk\" (UniqueName: \"kubernetes.io/projected/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-kube-api-access-wm7fk\") pod \"console-668956f9dd-mlrd8\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") " pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:54.637708 master-1 kubenswrapper[4740]: I1014 13:21:54.637565 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:21:55.131718 master-1 kubenswrapper[4740]: I1014 13:21:55.131662 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-668956f9dd-mlrd8"]
Oct 14 13:21:55.139151 master-1 kubenswrapper[4740]: W1014 13:21:55.139092 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a83514f_e8a3_4a35_aaa4_cc530166fc2f.slice/crio-09d264ac75e76a234bfd604dd8a9108f6dd703393cb192c91624d7f9d9e426ed WatchSource:0}: Error finding container 09d264ac75e76a234bfd604dd8a9108f6dd703393cb192c91624d7f9d9e426ed: Status 404 returned error can't find the container with id 09d264ac75e76a234bfd604dd8a9108f6dd703393cb192c91624d7f9d9e426ed
Oct 14 13:21:55.498411 master-1 kubenswrapper[4740]: I1014 13:21:55.498339 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668956f9dd-mlrd8" event={"ID":"9a83514f-e8a3-4a35-aaa4-cc530166fc2f","Type":"ContainerStarted","Data":"09d264ac75e76a234bfd604dd8a9108f6dd703393cb192c91624d7f9d9e426ed"}
Oct 14 13:21:58.304685 master-1 kubenswrapper[4740]: I1014 13:21:58.304613 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-1"]
Oct 14 13:21:58.305507 master-1 kubenswrapper[4740]: I1014 13:21:58.305397 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.312142 master-1 kubenswrapper[4740]: I1014 13:21:58.308288 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-bm6wx"
Oct 14 13:21:58.319309 master-1 kubenswrapper[4740]: I1014 13:21:58.317532 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-1"]
Oct 14 13:21:58.394425 master-1 kubenswrapper[4740]: I1014 13:21:58.394349 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.394702 master-1 kubenswrapper[4740]: I1014 13:21:58.394460 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.495818 master-1 kubenswrapper[4740]: I1014 13:21:58.495766 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.495818 master-1 kubenswrapper[4740]: I1014 13:21:58.495820 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.496047 master-1 kubenswrapper[4740]: I1014 13:21:58.495898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.524349 master-1 kubenswrapper[4740]: I1014 13:21:58.522392 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_e39186c2ebd02622803bdbec6984de2a/kube-apiserver-cert-syncer/0.log"
Oct 14 13:21:58.524349 master-1 kubenswrapper[4740]: I1014 13:21:58.523507 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.529791 master-1 kubenswrapper[4740]: I1014 13:21:58.529711 4740 generic.go:334] "Generic (PLEG): container finished" podID="e39186c2ebd02622803bdbec6984de2a" containerID="4ce9abd39c3aeaad89568cd60fb0e427f27d0f38adcdff7f77bef90692c33338" exitCode=0
Oct 14 13:21:58.627042 master-1 kubenswrapper[4740]: I1014 13:21:58.626889 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1"
Oct 14 13:21:58.886743 master-1 kubenswrapper[4740]: I1014 13:21:58.886637 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-1"]
Oct 14 13:21:58.889512 master-1 kubenswrapper[4740]: I1014 13:21:58.889499 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:58.892000 master-1 kubenswrapper[4740]: I1014 13:21:58.891965 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Oct 14 13:21:58.893485 master-1 kubenswrapper[4740]: I1014 13:21:58.893366 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Oct 14 13:21:58.893485 master-1 kubenswrapper[4740]: I1014 13:21:58.893436 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Oct 14 13:21:58.895560 master-1 kubenswrapper[4740]: I1014 13:21:58.895418 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Oct 14 13:21:58.895560 master-1 kubenswrapper[4740]: I1014 13:21:58.895508 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Oct 14 13:21:58.895927 master-1 kubenswrapper[4740]: I1014 13:21:58.895732 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Oct 14 13:21:58.896309 master-1 kubenswrapper[4740]: I1014 13:21:58.896089 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-z89cl"
Oct 14 13:21:58.896865 master-1 kubenswrapper[4740]: I1014 13:21:58.896853 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Oct 14 13:21:58.900649 master-1 kubenswrapper[4740]: I1014 13:21:58.900618 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Oct 14 13:21:58.911392 master-1 kubenswrapper[4740]: I1014 13:21:58.911355 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-1"]
Oct 14 13:21:58.964204 master-1 kubenswrapper[4740]: I1014 13:21:58.964082 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body=
Oct 14 13:21:58.964204 master-1 kubenswrapper[4740]: I1014 13:21:58.964163 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused"
Oct 14 13:21:59.002581 master-1 kubenswrapper[4740]: I1014 13:21:59.002504 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-web-config\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.002769 master-1 kubenswrapper[4740]: I1014 13:21:59.002641 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.002769 master-1 kubenswrapper[4740]: I1014 13:21:59.002716 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-config-volume\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.002859 master-1 kubenswrapper[4740]: I1014 13:21:59.002775 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.002859 master-1 kubenswrapper[4740]: I1014 13:21:59.002847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.002958 master-1 kubenswrapper[4740]: I1014 13:21:59.002924 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-config-out\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.003007 master-1 kubenswrapper[4740]: I1014 13:21:59.002976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-tls-assets\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.003043 master-1 kubenswrapper[4740]: I1014 13:21:59.003007 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.004767 master-1 kubenswrapper[4740]: I1014 13:21:59.004526 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.004767 master-1 kubenswrapper[4740]: I1014 13:21:59.004704 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.004863 master-1 kubenswrapper[4740]: I1014 13:21:59.004822 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2kd\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-kube-api-access-hv2kd\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.004949 master-1 kubenswrapper[4740]: I1014 13:21:59.004909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.101398 master-1 kubenswrapper[4740]: I1014 13:21:59.101338 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_e39186c2ebd02622803bdbec6984de2a/kube-apiserver-cert-syncer/0.log"
Oct 14 13:21:59.102357 master-1 kubenswrapper[4740]: I1014 13:21:59.102330 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:21:59.106092 master-1 kubenswrapper[4740]: I1014 13:21:59.106053 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-config-out\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-tls-assets\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106157 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106177 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2kd\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-kube-api-access-hv2kd\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106198 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106244 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-web-config\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106287 master-1 kubenswrapper[4740]: I1014 13:21:59.106294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106528 master-1 kubenswrapper[4740]: I1014 13:21:59.106327 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-config-volume\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106528 master-1 kubenswrapper[4740]: I1014 13:21:59.106371 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.106528 master-1 kubenswrapper[4740]: I1014 13:21:59.106407 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:21:59.107616 master-1 kubenswrapper[4740]: I1014 13:21:59.107150 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping
status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022" Oct 14 13:21:59.107616 master-1 kubenswrapper[4740]: I1014 13:21:59.107437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-main-db\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.108222 master-1 kubenswrapper[4740]: I1014 13:21:59.108182 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-metrics-client-ca\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.108905 master-1 kubenswrapper[4740]: I1014 13:21:59.108768 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.110599 master-1 kubenswrapper[4740]: I1014 13:21:59.110574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.110683 master-1 kubenswrapper[4740]: I1014 13:21:59.110640 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-tls-assets\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.110921 master-1 kubenswrapper[4740]: I1014 13:21:59.110826 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.110921 master-1 kubenswrapper[4740]: I1014 13:21:59.110845 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.110921 master-1 kubenswrapper[4740]: I1014 13:21:59.110859 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-config-out\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.111506 master-1 kubenswrapper[4740]: I1014 13:21:59.111464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-web-config\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.111565 master-1 kubenswrapper[4740]: I1014 13:21:59.111508 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" 
(UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-main-tls\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.112105 master-1 kubenswrapper[4740]: I1014 13:21:59.112006 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-config-volume\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.132039 master-1 kubenswrapper[4740]: I1014 13:21:59.131983 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2kd\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-kube-api-access-hv2kd\") pod \"alertmanager-main-1\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.207718 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") pod \"e39186c2ebd02622803bdbec6984de2a\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.207769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") pod \"e39186c2ebd02622803bdbec6984de2a\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.207806 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") pod 
\"e39186c2ebd02622803bdbec6984de2a\" (UID: \"e39186c2ebd02622803bdbec6984de2a\") " Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.207867 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e39186c2ebd02622803bdbec6984de2a" (UID: "e39186c2ebd02622803bdbec6984de2a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.207919 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "e39186c2ebd02622803bdbec6984de2a" (UID: "e39186c2ebd02622803bdbec6984de2a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.207951 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "e39186c2ebd02622803bdbec6984de2a" (UID: "e39186c2ebd02622803bdbec6984de2a"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.208339 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.208354 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:21:59.212095 master-1 kubenswrapper[4740]: I1014 13:21:59.208383 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e39186c2ebd02622803bdbec6984de2a-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:21:59.212528 master-1 kubenswrapper[4740]: I1014 13:21:59.212278 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:21:59.458858 master-1 kubenswrapper[4740]: I1014 13:21:59.458732 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-1"] Oct 14 13:21:59.472906 master-1 kubenswrapper[4740]: W1014 13:21:59.472854 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf20d58b_20fc_4fed_b18b_daf1bdc0665e.slice/crio-7be171fc0fe65eb379d06886f61135096a668674870b9316bb601825e33c4a7a WatchSource:0}: Error finding container 7be171fc0fe65eb379d06886f61135096a668674870b9316bb601825e33c4a7a: Status 404 returned error can't find the container with id 7be171fc0fe65eb379d06886f61135096a668674870b9316bb601825e33c4a7a Oct 14 13:21:59.539335 master-1 kubenswrapper[4740]: I1014 13:21:59.539243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-1" 
event={"ID":"af20d58b-20fc-4fed-b18b-daf1bdc0665e","Type":"ContainerStarted","Data":"7be171fc0fe65eb379d06886f61135096a668674870b9316bb601825e33c4a7a"} Oct 14 13:21:59.542192 master-1 kubenswrapper[4740]: I1014 13:21:59.542137 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_e39186c2ebd02622803bdbec6984de2a/kube-apiserver-cert-syncer/0.log" Oct 14 13:21:59.542985 master-1 kubenswrapper[4740]: I1014 13:21:59.542941 4740 scope.go:117] "RemoveContainer" containerID="e45346c1521e16aa358a9e0243b29f57c340b98cd05f02aa4089f7ed3a6ef8d0" Oct 14 13:21:59.543094 master-1 kubenswrapper[4740]: I1014 13:21:59.543055 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:21:59.547564 master-1 kubenswrapper[4740]: I1014 13:21:59.547385 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668956f9dd-mlrd8" event={"ID":"9a83514f-e8a3-4a35-aaa4-cc530166fc2f","Type":"ContainerStarted","Data":"e39245116eb198b69028ed732077ffccaa450a3f2e0c328aea1700b8957f8d11"} Oct 14 13:21:59.551632 master-1 kubenswrapper[4740]: I1014 13:21:59.551564 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022" Oct 14 13:21:59.563170 master-1 kubenswrapper[4740]: I1014 13:21:59.563130 4740 scope.go:117] "RemoveContainer" containerID="f8f2db597279287746568152e8aa7a3e94b07b8fc1075f744d7794b4d682afbc" Oct 14 13:21:59.580056 master-1 kubenswrapper[4740]: I1014 13:21:59.579975 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-668956f9dd-mlrd8" podStartSLOduration=1.619549707 podStartE2EDuration="5.579953047s" podCreationTimestamp="2025-10-14 13:21:54 +0000 UTC" firstStartedPulling="2025-10-14 
13:21:55.14089347 +0000 UTC m=+940.951182799" lastFinishedPulling="2025-10-14 13:21:59.10129681 +0000 UTC m=+944.911586139" observedRunningTime="2025-10-14 13:21:59.577100832 +0000 UTC m=+945.387390161" watchObservedRunningTime="2025-10-14 13:21:59.579953047 +0000 UTC m=+945.390242376" Oct 14 13:21:59.582588 master-1 kubenswrapper[4740]: I1014 13:21:59.582546 4740 scope.go:117] "RemoveContainer" containerID="d5925b84c60ef6f3443add991b592427b7d32ac6283cdca5542873b4676b09d9" Oct 14 13:21:59.583297 master-1 kubenswrapper[4740]: I1014 13:21:59.583239 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="e39186c2ebd02622803bdbec6984de2a" podUID="42d61efaa0f96869cf2939026aad6022" Oct 14 13:21:59.614157 master-1 kubenswrapper[4740]: I1014 13:21:59.614118 4740 scope.go:117] "RemoveContainer" containerID="3f98f494037c823a91c8e5e8cb3c5e66596570a1d3b528a3c2d4edd5aa660c69" Oct 14 13:21:59.631785 master-1 kubenswrapper[4740]: I1014 13:21:59.631739 4740 scope.go:117] "RemoveContainer" containerID="4ce9abd39c3aeaad89568cd60fb0e427f27d0f38adcdff7f77bef90692c33338" Oct 14 13:21:59.655510 master-1 kubenswrapper[4740]: I1014 13:21:59.655465 4740 scope.go:117] "RemoveContainer" containerID="0b4d74993a1401e4b6e850b179ab51065f53ea80ad8756c8a740b78b0804b4e2" Oct 14 13:21:59.681095 master-1 kubenswrapper[4740]: I1014 13:21:59.681050 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-1"] Oct 14 13:21:59.700411 master-1 kubenswrapper[4740]: W1014 13:21:59.700364 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e010854_ec42_42d1_8865_0fe4c78214ef.slice/crio-d2afd0bccf90d81ae2b279c246a03cd4870951d63fc4374bfc53d36696793b56 WatchSource:0}: Error finding container d2afd0bccf90d81ae2b279c246a03cd4870951d63fc4374bfc53d36696793b56: Status 404 returned error can't find 
the container with id d2afd0bccf90d81ae2b279c246a03cd4870951d63fc4374bfc53d36696793b56 Oct 14 13:21:59.902292 master-1 kubenswrapper[4740]: I1014 13:21:59.902207 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-cc99494f6-ds5gd"] Oct 14 13:21:59.904659 master-1 kubenswrapper[4740]: I1014 13:21:59.904622 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:21:59.907315 master-1 kubenswrapper[4740]: I1014 13:21:59.907280 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-zf6rs" Oct 14 13:21:59.907410 master-1 kubenswrapper[4740]: I1014 13:21:59.907351 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Oct 14 13:21:59.907560 master-1 kubenswrapper[4740]: I1014 13:21:59.907357 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Oct 14 13:21:59.907635 master-1 kubenswrapper[4740]: I1014 13:21:59.907280 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Oct 14 13:21:59.907709 master-1 kubenswrapper[4740]: I1014 13:21:59.907672 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Oct 14 13:21:59.908338 master-1 kubenswrapper[4740]: I1014 13:21:59.908057 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Oct 14 13:21:59.908550 master-1 kubenswrapper[4740]: I1014 13:21:59.908521 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8otna1nr4bh0o" Oct 14 13:21:59.927382 master-1 kubenswrapper[4740]: I1014 13:21:59.927071 4740 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-monitoring/thanos-querier-cc99494f6-ds5gd"] Oct 14 13:22:00.062329 master-1 kubenswrapper[4740]: I1014 13:22:00.062135 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtm5x\" (UniqueName: \"kubernetes.io/projected/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-kube-api-access-dtm5x\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.062329 master-1 kubenswrapper[4740]: I1014 13:22:00.062193 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-tls\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.062971 master-1 kubenswrapper[4740]: I1014 13:22:00.062501 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-metrics-client-ca\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.062971 master-1 kubenswrapper[4740]: I1014 13:22:00.062602 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.062971 master-1 kubenswrapper[4740]: I1014 13:22:00.062622 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-grpc-tls\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.062971 master-1 kubenswrapper[4740]: I1014 13:22:00.062678 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.062971 master-1 kubenswrapper[4740]: I1014 13:22:00.062738 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.062971 master-1 kubenswrapper[4740]: I1014 13:22:00.062782 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.164666 master-1 kubenswrapper[4740]: I1014 13:22:00.164587 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.164666 master-1 kubenswrapper[4740]: I1014 13:22:00.164654 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.165004 master-1 kubenswrapper[4740]: I1014 13:22:00.164695 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtm5x\" (UniqueName: \"kubernetes.io/projected/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-kube-api-access-dtm5x\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.165004 master-1 kubenswrapper[4740]: I1014 13:22:00.164724 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-tls\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.165004 master-1 kubenswrapper[4740]: I1014 13:22:00.164804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-metrics-client-ca\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: 
\"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.165004 master-1 kubenswrapper[4740]: I1014 13:22:00.164845 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-grpc-tls\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.165004 master-1 kubenswrapper[4740]: I1014 13:22:00.164867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.165004 master-1 kubenswrapper[4740]: I1014 13:22:00.164901 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.166180 master-1 kubenswrapper[4740]: I1014 13:22:00.166132 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-metrics-client-ca\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.171291 master-1 kubenswrapper[4740]: I1014 13:22:00.170021 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.171291 master-1 kubenswrapper[4740]: I1014 13:22:00.170171 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-grpc-tls\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.171291 master-1 kubenswrapper[4740]: I1014 13:22:00.170241 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.171291 master-1 kubenswrapper[4740]: I1014 13:22:00.169947 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-tls\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.171291 master-1 kubenswrapper[4740]: I1014 13:22:00.170593 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: 
\"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.171291 master-1 kubenswrapper[4740]: I1014 13:22:00.171037 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.184477 master-1 kubenswrapper[4740]: I1014 13:22:00.184435 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtm5x\" (UniqueName: \"kubernetes.io/projected/fa8361b8-f9e0-44d8-9ef1-766c6b0df517-kube-api-access-dtm5x\") pod \"thanos-querier-cc99494f6-ds5gd\" (UID: \"fa8361b8-f9e0-44d8-9ef1-766c6b0df517\") " pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.228949 master-1 kubenswrapper[4740]: I1014 13:22:00.228885 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:00.555008 master-1 kubenswrapper[4740]: I1014 13:22:00.554964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerStarted","Data":"d2afd0bccf90d81ae2b279c246a03cd4870951d63fc4374bfc53d36696793b56"} Oct 14 13:22:00.556683 master-1 kubenswrapper[4740]: I1014 13:22:00.556655 4740 generic.go:334] "Generic (PLEG): container finished" podID="af20d58b-20fc-4fed-b18b-daf1bdc0665e" containerID="899b0e6f418e894ab72129aca6f432bee218e2343bccf2ed54fee967bb7a2a49" exitCode=0 Oct 14 13:22:00.556755 master-1 kubenswrapper[4740]: I1014 13:22:00.556703 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-1" event={"ID":"af20d58b-20fc-4fed-b18b-daf1bdc0665e","Type":"ContainerDied","Data":"899b0e6f418e894ab72129aca6f432bee218e2343bccf2ed54fee967bb7a2a49"} Oct 14 13:22:00.658660 master-1 kubenswrapper[4740]: I1014 13:22:00.658590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-cc99494f6-ds5gd"] Oct 14 13:22:00.659804 master-1 kubenswrapper[4740]: W1014 13:22:00.659761 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa8361b8_f9e0_44d8_9ef1_766c6b0df517.slice/crio-16ee66dc5f4363bed93e3679b01a3975d6e8d209d2aafbfd98731c6a8f8d39b2 WatchSource:0}: Error finding container 16ee66dc5f4363bed93e3679b01a3975d6e8d209d2aafbfd98731c6a8f8d39b2: Status 404 returned error can't find the container with id 16ee66dc5f4363bed93e3679b01a3975d6e8d209d2aafbfd98731c6a8f8d39b2 Oct 14 13:22:00.955664 master-1 kubenswrapper[4740]: I1014 13:22:00.955527 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39186c2ebd02622803bdbec6984de2a" 
path="/var/lib/kubelet/pods/e39186c2ebd02622803bdbec6984de2a/volumes" Oct 14 13:22:01.566253 master-1 kubenswrapper[4740]: I1014 13:22:01.566164 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerID="a84aecc46913d9e8fc0c5cbda4b2f3b75b648a397381adaad0e904bcace46824" exitCode=0 Oct 14 13:22:01.567077 master-1 kubenswrapper[4740]: I1014 13:22:01.566269 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"a84aecc46913d9e8fc0c5cbda4b2f3b75b648a397381adaad0e904bcace46824"} Oct 14 13:22:01.568642 master-1 kubenswrapper[4740]: I1014 13:22:01.568593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" event={"ID":"fa8361b8-f9e0-44d8-9ef1-766c6b0df517","Type":"ContainerStarted","Data":"16ee66dc5f4363bed93e3679b01a3975d6e8d209d2aafbfd98731c6a8f8d39b2"} Oct 14 13:22:01.897608 master-1 kubenswrapper[4740]: I1014 13:22:01.897557 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 14 13:22:01.989598 master-1 kubenswrapper[4740]: I1014 13:22:01.989525 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kube-api-access\") pod \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " Oct 14 13:22:01.989857 master-1 kubenswrapper[4740]: I1014 13:22:01.989651 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kubelet-dir\") pod \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\" (UID: \"af20d58b-20fc-4fed-b18b-daf1bdc0665e\") " Oct 14 13:22:01.989857 master-1 kubenswrapper[4740]: I1014 13:22:01.989746 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af20d58b-20fc-4fed-b18b-daf1bdc0665e" (UID: "af20d58b-20fc-4fed-b18b-daf1bdc0665e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:22:01.990069 master-1 kubenswrapper[4740]: I1014 13:22:01.990034 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:01.992990 master-1 kubenswrapper[4740]: I1014 13:22:01.992942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af20d58b-20fc-4fed-b18b-daf1bdc0665e" (UID: "af20d58b-20fc-4fed-b18b-daf1bdc0665e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:22:02.091784 master-1 kubenswrapper[4740]: I1014 13:22:02.091716 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af20d58b-20fc-4fed-b18b-daf1bdc0665e-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:02.575695 master-1 kubenswrapper[4740]: I1014 13:22:02.575606 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-1" event={"ID":"af20d58b-20fc-4fed-b18b-daf1bdc0665e","Type":"ContainerDied","Data":"7be171fc0fe65eb379d06886f61135096a668674870b9316bb601825e33c4a7a"} Oct 14 13:22:02.575695 master-1 kubenswrapper[4740]: I1014 13:22:02.575655 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be171fc0fe65eb379d06886f61135096a668674870b9316bb601825e33c4a7a" Oct 14 13:22:02.576789 master-1 kubenswrapper[4740]: I1014 13:22:02.575737 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-1" Oct 14 13:22:03.957134 master-1 kubenswrapper[4740]: I1014 13:22:03.957079 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:22:03.957684 master-1 kubenswrapper[4740]: I1014 13:22:03.957154 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:22:04.638688 master-1 kubenswrapper[4740]: I1014 13:22:04.638634 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-668956f9dd-mlrd8" Oct 14 13:22:04.638688 master-1 kubenswrapper[4740]: I1014 13:22:04.638699 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-668956f9dd-mlrd8" Oct 14 13:22:04.646096 master-1 kubenswrapper[4740]: I1014 13:22:04.646025 4740 patch_prober.go:28] interesting pod/console-668956f9dd-mlrd8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Oct 14 13:22:04.646096 master-1 kubenswrapper[4740]: I1014 13:22:04.646086 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-668956f9dd-mlrd8" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Oct 14 13:22:05.410268 master-1 kubenswrapper[4740]: I1014 
13:22:05.409314 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-1"] Oct 14 13:22:05.410268 master-1 kubenswrapper[4740]: E1014 13:22:05.410135 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af20d58b-20fc-4fed-b18b-daf1bdc0665e" containerName="pruner" Oct 14 13:22:05.410268 master-1 kubenswrapper[4740]: I1014 13:22:05.410149 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="af20d58b-20fc-4fed-b18b-daf1bdc0665e" containerName="pruner" Oct 14 13:22:05.411485 master-1 kubenswrapper[4740]: I1014 13:22:05.410299 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="af20d58b-20fc-4fed-b18b-daf1bdc0665e" containerName="pruner" Oct 14 13:22:05.412445 master-1 kubenswrapper[4740]: I1014 13:22:05.412410 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.415622 master-1 kubenswrapper[4740]: I1014 13:22:05.415587 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8klgi7r2728qp" Oct 14 13:22:05.415907 master-1 kubenswrapper[4740]: I1014 13:22:05.415874 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Oct 14 13:22:05.415984 master-1 kubenswrapper[4740]: I1014 13:22:05.415893 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Oct 14 13:22:05.416057 master-1 kubenswrapper[4740]: I1014 13:22:05.416010 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Oct 14 13:22:05.416123 master-1 kubenswrapper[4740]: I1014 13:22:05.416114 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Oct 14 13:22:05.416288 master-1 kubenswrapper[4740]: I1014 13:22:05.416212 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-dzg65" Oct 14 13:22:05.416598 master-1 kubenswrapper[4740]: I1014 13:22:05.416555 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Oct 14 13:22:05.416886 master-1 kubenswrapper[4740]: I1014 13:22:05.416854 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Oct 14 13:22:05.416966 master-1 kubenswrapper[4740]: I1014 13:22:05.416947 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Oct 14 13:22:05.417032 master-1 kubenswrapper[4740]: I1014 13:22:05.416964 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Oct 14 13:22:05.417032 master-1 kubenswrapper[4740]: I1014 13:22:05.416982 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Oct 14 13:22:05.423620 master-1 kubenswrapper[4740]: I1014 13:22:05.423586 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Oct 14 13:22:05.424824 master-1 kubenswrapper[4740]: I1014 13:22:05.424793 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Oct 14 13:22:05.436917 master-1 kubenswrapper[4740]: I1014 13:22:05.436858 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-1"] Oct 14 13:22:05.548077 master-1 kubenswrapper[4740]: I1014 13:22:05.548005 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: 
\"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548077 master-1 kubenswrapper[4740]: I1014 13:22:05.548074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548340 master-1 kubenswrapper[4740]: I1014 13:22:05.548108 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548340 master-1 kubenswrapper[4740]: I1014 13:22:05.548135 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548340 master-1 kubenswrapper[4740]: I1014 13:22:05.548163 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bsn5\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-kube-api-access-5bsn5\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548440 master-1 kubenswrapper[4740]: I1014 13:22:05.548356 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548542 master-1 kubenswrapper[4740]: I1014 13:22:05.548499 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548608 master-1 kubenswrapper[4740]: I1014 13:22:05.548586 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-config\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548649 master-1 kubenswrapper[4740]: I1014 13:22:05.548616 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-config-out\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548649 master-1 kubenswrapper[4740]: I1014 13:22:05.548635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548714 master-1 kubenswrapper[4740]: I1014 
13:22:05.548654 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548747 master-1 kubenswrapper[4740]: I1014 13:22:05.548716 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548889 master-1 kubenswrapper[4740]: I1014 13:22:05.548854 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548930 master-1 kubenswrapper[4740]: I1014 13:22:05.548903 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548960 master-1 kubenswrapper[4740]: I1014 13:22:05.548928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-tls-assets\") pod \"prometheus-k8s-1\" (UID: 
\"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.548960 master-1 kubenswrapper[4740]: I1014 13:22:05.548952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.549018 master-1 kubenswrapper[4740]: I1014 13:22:05.548975 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-web-config\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.549072 master-1 kubenswrapper[4740]: I1014 13:22:05.549052 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.607124 master-1 kubenswrapper[4740]: I1014 13:22:05.607009 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerStarted","Data":"865870b6b49f0cb5a23675fad0cb08752b49e92a717e00ab381a0955ca070aa7"} Oct 14 13:22:05.607124 master-1 kubenswrapper[4740]: I1014 13:22:05.607071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerStarted","Data":"2d35af07a49e7f21f0ba554ddc9bea2d97b4fcbacd5c0e98974581e6d7435ea4"} Oct 14 
13:22:05.607124 master-1 kubenswrapper[4740]: I1014 13:22:05.607088 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerStarted","Data":"2bab329603dda3b4c9b113215f87430323c4479f0804295ed235b9f0cdcfd9da"} Oct 14 13:22:05.607124 master-1 kubenswrapper[4740]: I1014 13:22:05.607102 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerStarted","Data":"896437c579da16931c104f320f48e66ad3bdacca0402b226cbf829c7415c8533"} Oct 14 13:22:05.607124 master-1 kubenswrapper[4740]: I1014 13:22:05.607114 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerStarted","Data":"d6fbd521b7e482875c76bbbf31905dd68738819cc22f806fcdfa74994c0357c3"} Oct 14 13:22:05.613780 master-1 kubenswrapper[4740]: I1014 13:22:05.613710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" event={"ID":"fa8361b8-f9e0-44d8-9ef1-766c6b0df517","Type":"ContainerStarted","Data":"955d28c4fd4fb11a49a45a4d413d6e19b6956d41b32f6548ecfdbe316c82aa03"} Oct 14 13:22:05.613960 master-1 kubenswrapper[4740]: I1014 13:22:05.613788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" event={"ID":"fa8361b8-f9e0-44d8-9ef1-766c6b0df517","Type":"ContainerStarted","Data":"8ff2f91b527a0697ce37c0c337f47f7d3038d49c4e1bd0bc5a35771e500c4c96"} Oct 14 13:22:05.613960 master-1 kubenswrapper[4740]: I1014 13:22:05.613806 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" event={"ID":"fa8361b8-f9e0-44d8-9ef1-766c6b0df517","Type":"ContainerStarted","Data":"91148f90b458f055169bfc18389de899429eb9b298b5476d50e57f5b6763494a"} 
Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.650842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.650936 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.650966 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.650991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.651016 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " 
pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.651055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bsn5\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-kube-api-access-5bsn5\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.651088 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.651127 master-1 kubenswrapper[4740]: I1014 13:22:05.651127 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651159 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-config\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-config-out\") pod \"prometheus-k8s-1\" (UID: 
\"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651204 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651382 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651517 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651593 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651680 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-web-config\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.651520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-db\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.653201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.653368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.653603 master-1 kubenswrapper[4740]: I1014 13:22:05.653447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.654741 master-1 kubenswrapper[4740]: I1014 13:22:05.654317 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-config-out\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.654741 master-1 kubenswrapper[4740]: I1014 13:22:05.654359 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-metrics-client-ca\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.656064 master-1 kubenswrapper[4740]: I1014 13:22:05.655673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.656064 master-1 kubenswrapper[4740]: I1014 13:22:05.655754 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.656938 master-1 kubenswrapper[4740]: I1014 13:22:05.656896 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.657084 master-1 kubenswrapper[4740]: I1014 13:22:05.657047 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.657761 master-1 kubenswrapper[4740]: I1014 13:22:05.657729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-tls-assets\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.658880 master-1 kubenswrapper[4740]: I1014 13:22:05.658818 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.658973 master-1 kubenswrapper[4740]: I1014 13:22:05.658963 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-web-config\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.659353 master-1 kubenswrapper[4740]: I1014 13:22:05.659313 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.659759 master-1 kubenswrapper[4740]: I1014 13:22:05.659725 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-config\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.660629 master-1 kubenswrapper[4740]: I1014 13:22:05.660579 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-grpc-tls\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.661703 master-1 kubenswrapper[4740]: I1014 13:22:05.661469 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-metrics-client-certs\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.669651 master-1 kubenswrapper[4740]: I1014 13:22:05.669619 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bsn5\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-kube-api-access-5bsn5\") pod \"prometheus-k8s-1\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") " pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:05.745894 master-1 kubenswrapper[4740]: I1014 13:22:05.745827 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:06.207461 master-1 kubenswrapper[4740]: I1014 13:22:06.207407 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-1"] Oct 14 13:22:06.621914 master-1 kubenswrapper[4740]: I1014 13:22:06.621866 4740 generic.go:334] "Generic (PLEG): container finished" podID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerID="3964865eda1440fe224070be2658bbefa239f5e54c4bda527ce7baa007443af6" exitCode=0 Oct 14 13:22:06.622380 master-1 kubenswrapper[4740]: I1014 13:22:06.621921 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"3964865eda1440fe224070be2658bbefa239f5e54c4bda527ce7baa007443af6"} Oct 14 13:22:06.622380 master-1 kubenswrapper[4740]: I1014 13:22:06.621982 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerStarted","Data":"33962ac369a7e77322dad7b7f85a4a76376c077e39f196dc4a3286462fde03f6"} Oct 14 13:22:06.625598 master-1 kubenswrapper[4740]: I1014 13:22:06.625556 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" event={"ID":"fa8361b8-f9e0-44d8-9ef1-766c6b0df517","Type":"ContainerStarted","Data":"12a39e0073eb9e8068bcfb5978c453058b3f1f3ebfb21f85ae9715df30e5fc10"} Oct 14 13:22:06.625598 master-1 kubenswrapper[4740]: I1014 13:22:06.625589 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" event={"ID":"fa8361b8-f9e0-44d8-9ef1-766c6b0df517","Type":"ContainerStarted","Data":"a51d52579105f88874bd9f2a1334ef79b82734feed4c2359b4010de02bd0450b"} Oct 14 13:22:06.630653 master-1 kubenswrapper[4740]: I1014 13:22:06.630617 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerStarted","Data":"81bb6172d8fac973a83106863ff1970861e0f56fc47a0169b6bfb8b4e383deb0"} Oct 14 13:22:07.654543 master-1 kubenswrapper[4740]: I1014 13:22:07.654469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" event={"ID":"fa8361b8-f9e0-44d8-9ef1-766c6b0df517","Type":"ContainerStarted","Data":"e6cd62496abe93521dd65841ac3055aa9dda5d759fe03b28493321ad30def316"} Oct 14 13:22:07.697410 master-1 kubenswrapper[4740]: I1014 13:22:07.697311 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-1" podStartSLOduration=3.060267294 podStartE2EDuration="9.697289267s" podCreationTimestamp="2025-10-14 13:21:58 +0000 UTC" firstStartedPulling="2025-10-14 13:21:59.702641063 +0000 UTC m=+945.512930392" lastFinishedPulling="2025-10-14 13:22:06.339662996 +0000 UTC m=+952.149952365" observedRunningTime="2025-10-14 13:22:06.717903328 +0000 UTC m=+952.528192687" watchObservedRunningTime="2025-10-14 13:22:07.697289267 +0000 UTC m=+953.507578586" Oct 14 13:22:08.662668 master-1 kubenswrapper[4740]: I1014 13:22:08.662557 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:08.956421 master-1 kubenswrapper[4740]: I1014 13:22:08.956309 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:22:08.956421 master-1 kubenswrapper[4740]: I1014 13:22:08.956358 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:22:09.212912 master-1 kubenswrapper[4740]: I1014 13:22:09.212757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:22:09.677901 master-1 kubenswrapper[4740]: I1014 13:22:09.677830 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" Oct 14 13:22:09.719858 master-1 kubenswrapper[4740]: I1014 13:22:09.719789 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-cc99494f6-ds5gd" podStartSLOduration=5.039747484 podStartE2EDuration="10.719713711s" podCreationTimestamp="2025-10-14 13:21:59 +0000 UTC" firstStartedPulling="2025-10-14 13:22:00.662753369 +0000 UTC m=+946.473042698" lastFinishedPulling="2025-10-14 13:22:06.342719586 +0000 UTC m=+952.153008925" observedRunningTime="2025-10-14 13:22:07.696839225 +0000 UTC m=+953.507128554" watchObservedRunningTime="2025-10-14 13:22:09.719713711 +0000 UTC m=+955.530003050" Oct 14 13:22:10.942950 master-1 kubenswrapper[4740]: I1014 13:22:10.942839 4740 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:10.957269 master-1 kubenswrapper[4740]: I1014 13:22:10.957180 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="330e3da0-0baa-45c8-965d-f1a1b7d0d799" Oct 14 13:22:10.957269 master-1 kubenswrapper[4740]: I1014 13:22:10.957250 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="330e3da0-0baa-45c8-965d-f1a1b7d0d799" Oct 14 13:22:10.978094 master-1 kubenswrapper[4740]: I1014 13:22:10.978030 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:22:10.993680 master-1 kubenswrapper[4740]: I1014 13:22:10.990698 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:10.996250 master-1 kubenswrapper[4740]: I1014 13:22:10.996166 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:22:11.025790 master-1 kubenswrapper[4740]: I1014 13:22:11.020148 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:11.025790 master-1 kubenswrapper[4740]: I1014 13:22:11.024135 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:22:13.957070 master-1 kubenswrapper[4740]: I1014 13:22:13.956784 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:22:13.957070 master-1 kubenswrapper[4740]: I1014 13:22:13.956836 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:22:14.638530 master-1 kubenswrapper[4740]: I1014 13:22:14.638431 4740 patch_prober.go:28] interesting pod/console-668956f9dd-mlrd8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Oct 14 13:22:14.638806 master-1 kubenswrapper[4740]: I1014 13:22:14.638558 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-668956f9dd-mlrd8" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Oct 14 13:22:18.956922 master-1 kubenswrapper[4740]: I1014 13:22:18.956857 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:22:18.957457 master-1 kubenswrapper[4740]: I1014 13:22:18.956927 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:22:20.752080 master-1 kubenswrapper[4740]: I1014 13:22:20.752042 4740 generic.go:334] "Generic (PLEG): container finished" podID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerID="e645c51431c02e45ea744727452686571dd3fa84b28317ebe10c73ac34dfab66" exitCode=0 Oct 14 13:22:20.752080 master-1 kubenswrapper[4740]: I1014 13:22:20.752084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerDied","Data":"e645c51431c02e45ea744727452686571dd3fa84b28317ebe10c73ac34dfab66"} Oct 14 13:22:20.752931 master-1 kubenswrapper[4740]: I1014 13:22:20.752118 4740 scope.go:117] "RemoveContainer" containerID="61e2daca2897fcccbe37061c0f5b0d2fe210930fbd45f1ce31fa38a3f52c60ff" Oct 14 13:22:20.752931 master-1 kubenswrapper[4740]: I1014 13:22:20.752761 4740 scope.go:117] "RemoveContainer" containerID="e645c51431c02e45ea744727452686571dd3fa84b28317ebe10c73ac34dfab66" Oct 14 13:22:20.753048 master-1 kubenswrapper[4740]: E1014 13:22:20.753016 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-77674cffc8-k5fvv_openshift-route-controller-manager(e4c8f12e-4b62-49eb-a466-af75a571c62f)\"" 
pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" Oct 14 13:22:21.705266 master-1 kubenswrapper[4740]: W1014 13:22:21.705166 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d61efaa0f96869cf2939026aad6022.slice/crio-d1f11d5a46c0b7567fdef9dc67ffcff20357092f86b14ee94f5d80cb11146d37 WatchSource:0}: Error finding container d1f11d5a46c0b7567fdef9dc67ffcff20357092f86b14ee94f5d80cb11146d37: Status 404 returned error can't find the container with id d1f11d5a46c0b7567fdef9dc67ffcff20357092f86b14ee94f5d80cb11146d37 Oct 14 13:22:21.759779 master-1 kubenswrapper[4740]: I1014 13:22:21.759727 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7b6b7bb859-m8s2b_819cb927-5174-4df8-a723-cc07e53d9044/multus-admission-controller/0.log" Oct 14 13:22:21.760766 master-1 kubenswrapper[4740]: I1014 13:22:21.759792 4740 generic.go:334] "Generic (PLEG): container finished" podID="819cb927-5174-4df8-a723-cc07e53d9044" containerID="fe0e49ced70217b96835378cb2e4d66dc3f26f4f71857ad6f8c660fb548cbfcb" exitCode=137 Oct 14 13:22:21.760766 master-1 kubenswrapper[4740]: I1014 13:22:21.759902 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" event={"ID":"819cb927-5174-4df8-a723-cc07e53d9044","Type":"ContainerDied","Data":"fe0e49ced70217b96835378cb2e4d66dc3f26f4f71857ad6f8c660fb548cbfcb"} Oct 14 13:22:21.762163 master-1 kubenswrapper[4740]: I1014 13:22:21.762126 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"d1f11d5a46c0b7567fdef9dc67ffcff20357092f86b14ee94f5d80cb11146d37"} Oct 14 13:22:21.981338 master-1 kubenswrapper[4740]: I1014 13:22:21.981153 4740 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7b6b7bb859-m8s2b_819cb927-5174-4df8-a723-cc07e53d9044/multus-admission-controller/0.log" Oct 14 13:22:21.981338 master-1 kubenswrapper[4740]: I1014 13:22:21.981257 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:22:22.118254 master-1 kubenswrapper[4740]: I1014 13:22:22.116012 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/819cb927-5174-4df8-a723-cc07e53d9044-webhook-certs\") pod \"819cb927-5174-4df8-a723-cc07e53d9044\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " Oct 14 13:22:22.118254 master-1 kubenswrapper[4740]: I1014 13:22:22.116189 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qtzm\" (UniqueName: \"kubernetes.io/projected/819cb927-5174-4df8-a723-cc07e53d9044-kube-api-access-9qtzm\") pod \"819cb927-5174-4df8-a723-cc07e53d9044\" (UID: \"819cb927-5174-4df8-a723-cc07e53d9044\") " Oct 14 13:22:22.122248 master-1 kubenswrapper[4740]: I1014 13:22:22.120782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819cb927-5174-4df8-a723-cc07e53d9044-kube-api-access-9qtzm" (OuterVolumeSpecName: "kube-api-access-9qtzm") pod "819cb927-5174-4df8-a723-cc07e53d9044" (UID: "819cb927-5174-4df8-a723-cc07e53d9044"). InnerVolumeSpecName "kube-api-access-9qtzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:22:22.122248 master-1 kubenswrapper[4740]: I1014 13:22:22.120956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819cb927-5174-4df8-a723-cc07e53d9044-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "819cb927-5174-4df8-a723-cc07e53d9044" (UID: "819cb927-5174-4df8-a723-cc07e53d9044"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:22:22.218575 master-1 kubenswrapper[4740]: I1014 13:22:22.218519 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/819cb927-5174-4df8-a723-cc07e53d9044-webhook-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:22.218775 master-1 kubenswrapper[4740]: I1014 13:22:22.218581 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qtzm\" (UniqueName: \"kubernetes.io/projected/819cb927-5174-4df8-a723-cc07e53d9044-kube-api-access-9qtzm\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:22.773669 master-1 kubenswrapper[4740]: I1014 13:22:22.773593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerStarted","Data":"5ac1218809d0fc572cfec08d0c990ed62a777d84382fd79cdbb8e11b45766b3d"} Oct 14 13:22:22.773669 master-1 kubenswrapper[4740]: I1014 13:22:22.773660 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerStarted","Data":"f1ea437af65c58aa9a7defa07101efbb33a229bc2ca4bbc295be92bcd032e893"} Oct 14 13:22:22.773669 master-1 kubenswrapper[4740]: I1014 13:22:22.773680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerStarted","Data":"bf6d32c0ab07062e4cf2faa0fb3f11b49404272e70cf25e281d742b6cc15fdbe"} Oct 14 13:22:22.774567 master-1 kubenswrapper[4740]: I1014 13:22:22.773699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerStarted","Data":"a836f0f0d731ba4ebc1d5f5e51a85585abeecbda30cc3a088b3ec77311ff5bed"} Oct 14 13:22:22.774567 
master-1 kubenswrapper[4740]: I1014 13:22:22.773719 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerStarted","Data":"b61c1ab1ec698919e1b5cef271aec9037b0600ce60d4916637ddb3a39c701d95"} Oct 14 13:22:22.774567 master-1 kubenswrapper[4740]: I1014 13:22:22.773737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerStarted","Data":"38eaa2b002f57fd158787266306bcacdb5e72b8d03c630b6fdb586b70cd5b78c"} Oct 14 13:22:22.775585 master-1 kubenswrapper[4740]: I1014 13:22:22.775554 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7b6b7bb859-m8s2b_819cb927-5174-4df8-a723-cc07e53d9044/multus-admission-controller/0.log" Oct 14 13:22:22.775700 master-1 kubenswrapper[4740]: I1014 13:22:22.775676 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" event={"ID":"819cb927-5174-4df8-a723-cc07e53d9044","Type":"ContainerDied","Data":"cb9adbe57acf28baaf717de9066dd03ed15d95d96d4942466a3cd1dc6a3a0411"} Oct 14 13:22:22.775700 master-1 kubenswrapper[4740]: I1014 13:22:22.775702 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b" Oct 14 13:22:22.775853 master-1 kubenswrapper[4740]: I1014 13:22:22.775717 4740 scope.go:117] "RemoveContainer" containerID="440e19c3852cce8cff9d2a27938ed42d68f52d44868ed579ebaf8cd8b1e09955" Oct 14 13:22:22.778343 master-1 kubenswrapper[4740]: I1014 13:22:22.777852 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65bb9777fc-bm4pw" event={"ID":"a32f08cc-7db7-455b-b904-e74aef3a165a","Type":"ContainerStarted","Data":"00d3ba5cf7933543b943c16170b49423ffb090eb724e4f8103d5c9022fc00996"} Oct 14 13:22:22.778343 master-1 kubenswrapper[4740]: I1014 13:22:22.778109 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65bb9777fc-bm4pw" Oct 14 13:22:22.779944 master-1 kubenswrapper[4740]: I1014 13:22:22.779889 4740 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f" exitCode=0 Oct 14 13:22:22.779944 master-1 kubenswrapper[4740]: I1014 13:22:22.779941 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerDied","Data":"82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f"} Oct 14 13:22:22.780884 master-1 kubenswrapper[4740]: I1014 13:22:22.780745 4740 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bm4pw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.85:8080/\": dial tcp 10.128.0.85:8080: connect: connection refused" start-of-body= Oct 14 13:22:22.780884 master-1 kubenswrapper[4740]: I1014 13:22:22.780826 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bm4pw" podUID="a32f08cc-7db7-455b-b904-e74aef3a165a" 
containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.85:8080/\": dial tcp 10.128.0.85:8080: connect: connection refused" Oct 14 13:22:22.811034 master-1 kubenswrapper[4740]: I1014 13:22:22.810977 4740 scope.go:117] "RemoveContainer" containerID="fe0e49ced70217b96835378cb2e4d66dc3f26f4f71857ad6f8c660fb548cbfcb" Oct 14 13:22:22.824731 master-1 kubenswrapper[4740]: I1014 13:22:22.824266 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-1" podStartSLOduration=2.7519145099999998 podStartE2EDuration="17.824205854s" podCreationTimestamp="2025-10-14 13:22:05 +0000 UTC" firstStartedPulling="2025-10-14 13:22:06.627857433 +0000 UTC m=+952.438146752" lastFinishedPulling="2025-10-14 13:22:21.700148757 +0000 UTC m=+967.510438096" observedRunningTime="2025-10-14 13:22:22.816363409 +0000 UTC m=+968.626652808" watchObservedRunningTime="2025-10-14 13:22:22.824205854 +0000 UTC m=+968.634495213" Oct 14 13:22:22.846093 master-1 kubenswrapper[4740]: I1014 13:22:22.846037 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b"] Oct 14 13:22:22.853795 master-1 kubenswrapper[4740]: I1014 13:22:22.853698 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b"] Oct 14 13:22:22.899186 master-1 kubenswrapper[4740]: I1014 13:22:22.899100 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-65bb9777fc-bm4pw" podStartSLOduration=2.261183512 podStartE2EDuration="31.899082644s" podCreationTimestamp="2025-10-14 13:21:51 +0000 UTC" firstStartedPulling="2025-10-14 13:21:52.268133227 +0000 UTC m=+938.078422576" lastFinishedPulling="2025-10-14 13:22:21.906032379 +0000 UTC m=+967.716321708" observedRunningTime="2025-10-14 13:22:22.896882367 +0000 UTC m=+968.707171716" watchObservedRunningTime="2025-10-14 13:22:22.899082644 
+0000 UTC m=+968.709371983" Oct 14 13:22:22.960298 master-1 kubenswrapper[4740]: I1014 13:22:22.960125 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="819cb927-5174-4df8-a723-cc07e53d9044" path="/var/lib/kubelet/pods/819cb927-5174-4df8-a723-cc07e53d9044/volumes" Oct 14 13:22:23.793862 master-1 kubenswrapper[4740]: I1014 13:22:23.793805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570"} Oct 14 13:22:23.793862 master-1 kubenswrapper[4740]: I1014 13:22:23.793858 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d"} Oct 14 13:22:23.794411 master-1 kubenswrapper[4740]: I1014 13:22:23.793870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346"} Oct 14 13:22:23.800810 master-1 kubenswrapper[4740]: I1014 13:22:23.800764 4740 patch_prober.go:28] interesting pod/downloads-65bb9777fc-bm4pw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.85:8080/\": dial tcp 10.128.0.85:8080: connect: connection refused" start-of-body= Oct 14 13:22:23.800991 master-1 kubenswrapper[4740]: I1014 13:22:23.800824 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65bb9777fc-bm4pw" podUID="a32f08cc-7db7-455b-b904-e74aef3a165a" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.85:8080/\": dial tcp 10.128.0.85:8080: connect: 
connection refused" Oct 14 13:22:24.579711 master-1 kubenswrapper[4740]: I1014 13:22:24.579654 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/readyz\"","reason":"Forbidden","details":{},"code":403} Oct 14 13:22:24.579711 master-1 kubenswrapper[4740]: I1014 13:22:24.579710 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 14 13:22:24.638846 master-1 kubenswrapper[4740]: I1014 13:22:24.638734 4740 patch_prober.go:28] interesting pod/console-668956f9dd-mlrd8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Oct 14 13:22:24.638846 master-1 kubenswrapper[4740]: I1014 13:22:24.638795 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-668956f9dd-mlrd8" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Oct 14 13:22:24.807353 master-1 kubenswrapper[4740]: I1014 13:22:24.807297 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179"} Oct 14 13:22:24.807353 master-1 kubenswrapper[4740]: I1014 13:22:24.807338 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"42d61efaa0f96869cf2939026aad6022","Type":"ContainerStarted","Data":"359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958"} Oct 14 13:22:24.807873 master-1 kubenswrapper[4740]: I1014 13:22:24.807496 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:24.859811 master-1 kubenswrapper[4740]: I1014 13:22:24.859640 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:22:24.859811 master-1 kubenswrapper[4740]: I1014 13:22:24.859704 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:22:24.860164 master-1 kubenswrapper[4740]: I1014 13:22:24.860140 4740 scope.go:117] "RemoveContainer" containerID="e645c51431c02e45ea744727452686571dd3fa84b28317ebe10c73ac34dfab66" Oct 14 13:22:24.860437 master-1 kubenswrapper[4740]: E1014 13:22:24.860388 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-77674cffc8-k5fvv_openshift-route-controller-manager(e4c8f12e-4b62-49eb-a466-af75a571c62f)\"" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" Oct 14 13:22:25.746970 master-1 kubenswrapper[4740]: I1014 13:22:25.746902 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-1" Oct 14 13:22:25.876055 master-1 kubenswrapper[4740]: I1014 13:22:25.875954 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" 
podStartSLOduration=14.875938857 podStartE2EDuration="14.875938857s" podCreationTimestamp="2025-10-14 13:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:22:25.869817648 +0000 UTC m=+971.680106977" watchObservedRunningTime="2025-10-14 13:22:25.875938857 +0000 UTC m=+971.686228186" Oct 14 13:22:26.021339 master-1 kubenswrapper[4740]: I1014 13:22:26.021138 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:26.021339 master-1 kubenswrapper[4740]: I1014 13:22:26.021192 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: I1014 13:22:26.476299 4740 patch_prober.go:28] interesting pod/kube-apiserver-master-1 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]etcd ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok 
Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 
13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: livez check failed Oct 14 13:22:26.476502 master-1 kubenswrapper[4740]: I1014 13:22:26.476437 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: I1014 13:22:28.949412 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:22:28.949529 
master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:22:28.949529 master-1 
kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: [+]shutdown ok Oct 14 13:22:28.949529 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:22:28.952642 master-1 kubenswrapper[4740]: I1014 13:22:28.949539 4740 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: I1014 13:22:28.962320 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: 
[+]poststarthook/start-apiextensions-informers ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: [+]shutdown ok Oct 14 13:22:28.962390 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:22:28.965204 master-1 kubenswrapper[4740]: I1014 13:22:28.964380 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:29.246362 master-1 kubenswrapper[4740]: I1014 13:22:29.246299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: I1014 13:22:31.401538 4740 patch_prober.go:28] interesting pod/kube-apiserver-master-1 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]etcd ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 
13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:22:31.401896 
master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: livez check failed Oct 14 13:22:31.401896 master-1 kubenswrapper[4740]: I1014 13:22:31.401825 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:31.815707 master-1 kubenswrapper[4740]: I1014 13:22:31.815633 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65bb9777fc-bm4pw" Oct 14 13:22:31.852382 master-1 kubenswrapper[4740]: 
I1014 13:22:31.852293 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-8475fbcb68-p4n8s"] Oct 14 13:22:31.852701 master-1 kubenswrapper[4740]: I1014 13:22:31.852560 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" containerID="cri-o://ca6fc295da9f3231ac56c683e895278718ac1b23a52cca0c02cbe23b7495fbcc" gracePeriod=170 Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: I1014 13:22:33.965386 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: 
[+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: [+]shutdown ok Oct 14 13:22:33.965494 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:22:33.968784 master-1 kubenswrapper[4740]: I1014 13:22:33.965497 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:34.640306 master-1 kubenswrapper[4740]: I1014 13:22:34.640147 4740 patch_prober.go:28] interesting pod/console-668956f9dd-mlrd8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Oct 14 13:22:34.640306 master-1 kubenswrapper[4740]: I1014 13:22:34.640272 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-668956f9dd-mlrd8" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console" probeResult="failure" output="Get 
\"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Oct 14 13:22:35.944612 master-1 kubenswrapper[4740]: I1014 13:22:35.944516 4740 scope.go:117] "RemoveContainer" containerID="e645c51431c02e45ea744727452686571dd3fa84b28317ebe10c73ac34dfab66" Oct 14 13:22:35.945184 master-1 kubenswrapper[4740]: E1014 13:22:35.945038 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-77674cffc8-k5fvv_openshift-route-controller-manager(e4c8f12e-4b62-49eb-a466-af75a571c62f)\"" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: I1014 13:22:36.554969 4740 patch_prober.go:28] interesting pod/kube-apiserver-master-1 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]etcd ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: 
[+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: 
[+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:22:36.555112 master-1 kubenswrapper[4740]: livez check failed Oct 14 13:22:36.557414 master-1 kubenswrapper[4740]: I1014 13:22:36.557308 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: I1014 13:22:38.962062 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:22:38.962183 master-1 
kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: 
[+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: [+]shutdown ok
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:22:38.962183 master-1 kubenswrapper[4740]: I1014 13:22:38.962160 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:22:41.026607 master-1 kubenswrapper[4740]: I1014 13:22:41.026547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1"
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: I1014 13:22:41.168446 4740 patch_prober.go:28] interesting pod/kube-apiserver-master-1 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]etcd ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:22:41.168557 master-1 kubenswrapper[4740]: livez check failed
Oct 14 13:22:41.170153 master-1 kubenswrapper[4740]: I1014 13:22:41.168632 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:22:41.927672 master-1 kubenswrapper[4740]: I1014 13:22:41.927597 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56cfb99cfd-9798f"]
Oct 14 13:22:41.927922 master-1 kubenswrapper[4740]: I1014 13:22:41.927828 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" podUID="95ae2a7e-b760-4dc0-8b0e-adb39439db3f" containerName="controller-manager" containerID="cri-o://7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91" gracePeriod=30
Oct 14 13:22:41.985858 master-1 kubenswrapper[4740]: I1014 13:22:41.985039 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"]
Oct 14 13:22:42.336003 master-1 kubenswrapper[4740]: I1014 13:22:42.335742 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"
Oct 14 13:22:42.373629 master-1 kubenswrapper[4740]: I1014 13:22:42.372606 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c8f12e-4b62-49eb-a466-af75a571c62f-serving-cert\") pod \"e4c8f12e-4b62-49eb-a466-af75a571c62f\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") "
Oct 14 13:22:42.373629 master-1 kubenswrapper[4740]: I1014 13:22:42.372719 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-client-ca\") pod \"e4c8f12e-4b62-49eb-a466-af75a571c62f\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") "
Oct 14 13:22:42.373629 master-1 kubenswrapper[4740]: I1014 13:22:42.372817 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-config\") pod \"e4c8f12e-4b62-49eb-a466-af75a571c62f\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") "
Oct 14 13:22:42.373629 master-1 kubenswrapper[4740]: I1014 13:22:42.372905 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skwd\" (UniqueName: \"kubernetes.io/projected/e4c8f12e-4b62-49eb-a466-af75a571c62f-kube-api-access-4skwd\") pod \"e4c8f12e-4b62-49eb-a466-af75a571c62f\" (UID: \"e4c8f12e-4b62-49eb-a466-af75a571c62f\") "
Oct 14 13:22:42.374115 master-1 kubenswrapper[4740]: I1014 13:22:42.373897 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-config" (OuterVolumeSpecName: "config") pod "e4c8f12e-4b62-49eb-a466-af75a571c62f" (UID: "e4c8f12e-4b62-49eb-a466-af75a571c62f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:22:42.374399 master-1 kubenswrapper[4740]: I1014 13:22:42.374225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-client-ca" (OuterVolumeSpecName: "client-ca") pod "e4c8f12e-4b62-49eb-a466-af75a571c62f" (UID: "e4c8f12e-4b62-49eb-a466-af75a571c62f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:22:42.377797 master-1 kubenswrapper[4740]: I1014 13:22:42.377266 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c8f12e-4b62-49eb-a466-af75a571c62f-kube-api-access-4skwd" (OuterVolumeSpecName: "kube-api-access-4skwd") pod "e4c8f12e-4b62-49eb-a466-af75a571c62f" (UID: "e4c8f12e-4b62-49eb-a466-af75a571c62f"). InnerVolumeSpecName "kube-api-access-4skwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:22:42.378282 master-1 kubenswrapper[4740]: I1014 13:22:42.378190 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c8f12e-4b62-49eb-a466-af75a571c62f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e4c8f12e-4b62-49eb-a466-af75a571c62f" (UID: "e4c8f12e-4b62-49eb-a466-af75a571c62f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:22:42.421781 master-1 kubenswrapper[4740]: I1014 13:22:42.421736 4740 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:22:42.474984 master-1 kubenswrapper[4740]: I1014 13:22:42.474893 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jzj6\" (UniqueName: \"kubernetes.io/projected/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-kube-api-access-4jzj6\") pod \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " Oct 14 13:22:42.475259 master-1 kubenswrapper[4740]: I1014 13:22:42.475012 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-proxy-ca-bundles\") pod \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " Oct 14 13:22:42.475259 master-1 kubenswrapper[4740]: I1014 13:22:42.475079 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-config\") pod \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " Oct 14 13:22:42.475259 master-1 kubenswrapper[4740]: I1014 13:22:42.475131 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-serving-cert\") pod \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " Oct 14 13:22:42.475365 master-1 kubenswrapper[4740]: I1014 13:22:42.475273 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-client-ca\") pod \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\" (UID: \"95ae2a7e-b760-4dc0-8b0e-adb39439db3f\") " Oct 14 13:22:42.476038 master-1 kubenswrapper[4740]: I1014 13:22:42.476005 4740 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "95ae2a7e-b760-4dc0-8b0e-adb39439db3f" (UID: "95ae2a7e-b760-4dc0-8b0e-adb39439db3f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:22:42.476211 master-1 kubenswrapper[4740]: I1014 13:22:42.476173 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-config" (OuterVolumeSpecName: "config") pod "95ae2a7e-b760-4dc0-8b0e-adb39439db3f" (UID: "95ae2a7e-b760-4dc0-8b0e-adb39439db3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:22:42.476308 master-1 kubenswrapper[4740]: I1014 13:22:42.476270 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-client-ca" (OuterVolumeSpecName: "client-ca") pod "95ae2a7e-b760-4dc0-8b0e-adb39439db3f" (UID: "95ae2a7e-b760-4dc0-8b0e-adb39439db3f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:22:42.476395 master-1 kubenswrapper[4740]: I1014 13:22:42.476363 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c8f12e-4b62-49eb-a466-af75a571c62f-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.476395 master-1 kubenswrapper[4740]: I1014 13:22:42.476388 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.476460 master-1 kubenswrapper[4740]: I1014 13:22:42.476399 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-proxy-ca-bundles\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.476460 master-1 kubenswrapper[4740]: I1014 13:22:42.476410 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4c8f12e-4b62-49eb-a466-af75a571c62f-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.476460 master-1 kubenswrapper[4740]: I1014 13:22:42.476418 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.476460 master-1 kubenswrapper[4740]: I1014 13:22:42.476427 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4skwd\" (UniqueName: \"kubernetes.io/projected/e4c8f12e-4b62-49eb-a466-af75a571c62f-kube-api-access-4skwd\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.476460 master-1 kubenswrapper[4740]: I1014 13:22:42.476436 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-client-ca\") on node \"master-1\" 
DevicePath \"\"" Oct 14 13:22:42.479246 master-1 kubenswrapper[4740]: I1014 13:22:42.479141 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "95ae2a7e-b760-4dc0-8b0e-adb39439db3f" (UID: "95ae2a7e-b760-4dc0-8b0e-adb39439db3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:22:42.479870 master-1 kubenswrapper[4740]: I1014 13:22:42.479850 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-kube-api-access-4jzj6" (OuterVolumeSpecName: "kube-api-access-4jzj6") pod "95ae2a7e-b760-4dc0-8b0e-adb39439db3f" (UID: "95ae2a7e-b760-4dc0-8b0e-adb39439db3f"). InnerVolumeSpecName "kube-api-access-4jzj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:22:42.578574 master-1 kubenswrapper[4740]: I1014 13:22:42.578402 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.578574 master-1 kubenswrapper[4740]: I1014 13:22:42.578453 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jzj6\" (UniqueName: \"kubernetes.io/projected/95ae2a7e-b760-4dc0-8b0e-adb39439db3f-kube-api-access-4jzj6\") on node \"master-1\" DevicePath \"\"" Oct 14 13:22:42.975408 master-1 kubenswrapper[4740]: I1014 13:22:42.975138 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" event={"ID":"e4c8f12e-4b62-49eb-a466-af75a571c62f","Type":"ContainerDied","Data":"d97eb34a8632f0701dd952586765db3961305b34f75564be0070e3773d6d0ebe"} Oct 14 13:22:42.975408 master-1 kubenswrapper[4740]: I1014 13:22:42.975278 4740 scope.go:117] "RemoveContainer" 
containerID="e645c51431c02e45ea744727452686571dd3fa84b28317ebe10c73ac34dfab66" Oct 14 13:22:42.975408 master-1 kubenswrapper[4740]: I1014 13:22:42.975402 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv" Oct 14 13:22:42.979936 master-1 kubenswrapper[4740]: I1014 13:22:42.979851 4740 generic.go:334] "Generic (PLEG): container finished" podID="95ae2a7e-b760-4dc0-8b0e-adb39439db3f" containerID="7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91" exitCode=0 Oct 14 13:22:42.979936 master-1 kubenswrapper[4740]: I1014 13:22:42.979930 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" event={"ID":"95ae2a7e-b760-4dc0-8b0e-adb39439db3f","Type":"ContainerDied","Data":"7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91"} Oct 14 13:22:42.980263 master-1 kubenswrapper[4740]: I1014 13:22:42.979981 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" event={"ID":"95ae2a7e-b760-4dc0-8b0e-adb39439db3f","Type":"ContainerDied","Data":"2ee1320fddad365b7df09b4f4ca57138aaa99fa2f79fb6cec87285ae6b280ee5"} Oct 14 13:22:42.980263 master-1 kubenswrapper[4740]: I1014 13:22:42.980071 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56cfb99cfd-9798f" Oct 14 13:22:43.004807 master-1 kubenswrapper[4740]: I1014 13:22:43.004746 4740 scope.go:117] "RemoveContainer" containerID="7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91" Oct 14 13:22:43.024709 master-1 kubenswrapper[4740]: I1014 13:22:43.024654 4740 scope.go:117] "RemoveContainer" containerID="7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91" Oct 14 13:22:43.025391 master-1 kubenswrapper[4740]: E1014 13:22:43.025311 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91\": container with ID starting with 7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91 not found: ID does not exist" containerID="7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91" Oct 14 13:22:43.025508 master-1 kubenswrapper[4740]: I1014 13:22:43.025398 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91"} err="failed to get container status \"7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91\": rpc error: code = NotFound desc = could not find container \"7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91\": container with ID starting with 7b66e8c12af6728fa588073f6c1557696d99ef266dc772855730b9cfbbe93e91 not found: ID does not exist" Oct 14 13:22:43.082853 master-1 kubenswrapper[4740]: I1014 13:22:43.082767 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"] Oct 14 13:22:43.173475 master-1 kubenswrapper[4740]: I1014 13:22:43.173304 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv"] Oct 14 
13:22:43.272163 master-1 kubenswrapper[4740]: I1014 13:22:43.272054 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56cfb99cfd-9798f"]
Oct 14 13:22:43.313753 master-1 kubenswrapper[4740]: I1014 13:22:43.313660 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56cfb99cfd-9798f"]
Oct 14 13:22:43.371751 master-1 kubenswrapper[4740]: I1014 13:22:43.371655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"]
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: E1014 13:22:43.372104 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372128 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: E1014 13:22:43.372143 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372157 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: E1014 13:22:43.372190 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="kube-rbac-proxy"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372205 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="kube-rbac-proxy"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: E1014 13:22:43.372267 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ae2a7e-b760-4dc0-8b0e-adb39439db3f" containerName="controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372283 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ae2a7e-b760-4dc0-8b0e-adb39439db3f" containerName="controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: E1014 13:22:43.372317 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="multus-admission-controller"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372337 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="multus-admission-controller"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372566 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372589 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372608 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="kube-rbac-proxy"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372643 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ae2a7e-b760-4dc0-8b0e-adb39439db3f" containerName="controller-manager"
Oct 14 13:22:43.372733 master-1 kubenswrapper[4740]: I1014 13:22:43.372675 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="819cb927-5174-4df8-a723-cc07e53d9044" containerName="multus-admission-controller"
Oct 14 13:22:43.373817 master-1 kubenswrapper[4740]: I1014 13:22:43.373624 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.380419 master-1 kubenswrapper[4740]: I1014 13:22:43.380352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-t6l59"
Oct 14 13:22:43.380419 master-1 kubenswrapper[4740]: I1014 13:22:43.380360 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Oct 14 13:22:43.380947 master-1 kubenswrapper[4740]: I1014 13:22:43.380864 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Oct 14 13:22:43.380947 master-1 kubenswrapper[4740]: I1014 13:22:43.380900 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Oct 14 13:22:43.381225 master-1 kubenswrapper[4740]: I1014 13:22:43.381182 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Oct 14 13:22:43.381445 master-1 kubenswrapper[4740]: I1014 13:22:43.381409 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Oct 14 13:22:43.388828 master-1 kubenswrapper[4740]: I1014 13:22:43.388747 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"]
Oct 14 13:22:43.389221 master-1 kubenswrapper[4740]: E1014 13:22:43.389170 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.389221 master-1 kubenswrapper[4740]: I1014 13:22:43.389195 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.389461 master-1 kubenswrapper[4740]: I1014 13:22:43.389392 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" containerName="route-controller-manager"
Oct 14 13:22:43.390276 master-1 kubenswrapper[4740]: I1014 13:22:43.390187 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.391873 master-1 kubenswrapper[4740]: I1014 13:22:43.391794 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z72d\" (UniqueName: \"kubernetes.io/projected/8831d469-1dd6-492a-81e7-41fe30dbb6e3-kube-api-access-5z72d\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.392051 master-1 kubenswrapper[4740]: I1014 13:22:43.391884 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8831d469-1dd6-492a-81e7-41fe30dbb6e3-client-ca\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.392051 master-1 kubenswrapper[4740]: I1014 13:22:43.392046 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8831d469-1dd6-492a-81e7-41fe30dbb6e3-serving-cert\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.392336 master-1 kubenswrapper[4740]: I1014 13:22:43.392116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8831d469-1dd6-492a-81e7-41fe30dbb6e3-config\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.397832 master-1 kubenswrapper[4740]: I1014 13:22:43.395751 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Oct 14 13:22:43.397832 master-1 kubenswrapper[4740]: I1014 13:22:43.395947 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Oct 14 13:22:43.397832 master-1 kubenswrapper[4740]: I1014 13:22:43.396196 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Oct 14 13:22:43.397832 master-1 kubenswrapper[4740]: I1014 13:22:43.396804 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-2zbrt"
Oct 14 13:22:43.397832 master-1 kubenswrapper[4740]: I1014 13:22:43.397210 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Oct 14 13:22:43.397832 master-1 kubenswrapper[4740]: I1014 13:22:43.397615 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Oct 14 13:22:43.402439 master-1 kubenswrapper[4740]: I1014 13:22:43.402369 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Oct 14 13:22:43.409075 master-1 kubenswrapper[4740]: I1014 13:22:43.408627 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"]
Oct 14 13:22:43.419181 master-1 kubenswrapper[4740]: I1014 13:22:43.419068 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"]
Oct 14 13:22:43.494425 master-1 kubenswrapper[4740]: I1014 13:22:43.494319 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8831d469-1dd6-492a-81e7-41fe30dbb6e3-client-ca\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.494741 master-1 kubenswrapper[4740]: I1014 13:22:43.494436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-serving-cert\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.494741 master-1 kubenswrapper[4740]: I1014 13:22:43.494507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpz6r\" (UniqueName: \"kubernetes.io/projected/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-kube-api-access-hpz6r\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.494741 master-1 kubenswrapper[4740]: I1014 13:22:43.494704 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8831d469-1dd6-492a-81e7-41fe30dbb6e3-serving-cert\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.495016 master-1
kubenswrapper[4740]: I1014 13:22:43.494833 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8831d469-1dd6-492a-81e7-41fe30dbb6e3-config\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.495016 master-1 kubenswrapper[4740]: I1014 13:22:43.494870 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-proxy-ca-bundles\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.495016 master-1 kubenswrapper[4740]: I1014 13:22:43.494901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-client-ca\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.495016 master-1 kubenswrapper[4740]: I1014 13:22:43.494946 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-config\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.495450 master-1 kubenswrapper[4740]: I1014 13:22:43.495089 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z72d\" (UniqueName: \"kubernetes.io/projected/8831d469-1dd6-492a-81e7-41fe30dbb6e3-kube-api-access-5z72d\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.495616 master-1 kubenswrapper[4740]: I1014 13:22:43.495554 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8831d469-1dd6-492a-81e7-41fe30dbb6e3-client-ca\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.496381 master-1 kubenswrapper[4740]: I1014 13:22:43.496345 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8831d469-1dd6-492a-81e7-41fe30dbb6e3-config\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.498941 master-1 kubenswrapper[4740]: I1014 13:22:43.498886 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8831d469-1dd6-492a-81e7-41fe30dbb6e3-serving-cert\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"
Oct 14 13:22:43.597697 master-1 kubenswrapper[4740]: I1014 13:22:43.597488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-client-ca\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.597697 master-1 kubenswrapper[4740]: I1014 13:22:43.597614 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-config\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.598028 master-1 kubenswrapper[4740]: I1014 13:22:43.597781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-serving-cert\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.598101 master-1 kubenswrapper[4740]: I1014 13:22:43.598057 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpz6r\" (UniqueName: \"kubernetes.io/projected/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-kube-api-access-hpz6r\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.598500 master-1 kubenswrapper[4740]: I1014 13:22:43.598443 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-proxy-ca-bundles\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.616347 master-1 kubenswrapper[4740]: I1014 13:22:43.604699 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-client-ca\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.616347 master-1 kubenswrapper[4740]: I1014 13:22:43.604990 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-serving-cert\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.616347 master-1 kubenswrapper[4740]: I1014 13:22:43.611960 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-config\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.616347 master-1 kubenswrapper[4740]: I1014 13:22:43.615400 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-proxy-ca-bundles\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"
Oct 14 13:22:43.983382 master-1 kubenswrapper[4740]: I1014 13:22:43.981000 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1"
Oct 14 13:22:44.645923 master-1 kubenswrapper[4740]: I1014 13:22:44.645826 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:22:44.650560 master-1 kubenswrapper[4740]: I1014 13:22:44.650512 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-console/console-668956f9dd-mlrd8" Oct 14 13:22:44.953525 master-1 kubenswrapper[4740]: I1014 13:22:44.953381 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ae2a7e-b760-4dc0-8b0e-adb39439db3f" path="/var/lib/kubelet/pods/95ae2a7e-b760-4dc0-8b0e-adb39439db3f/volumes" Oct 14 13:22:44.953963 master-1 kubenswrapper[4740]: I1014 13:22:44.953926 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c8f12e-4b62-49eb-a466-af75a571c62f" path="/var/lib/kubelet/pods/e4c8f12e-4b62-49eb-a466-af75a571c62f/volumes" Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: I1014 13:22:45.612428 4740 patch_prober.go:28] interesting pod/metrics-server-8475fbcb68-p4n8s container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: [+]metric-storage-ready ok Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: [+]metric-informer-sync ok Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: [+]metadata-informer-sync ok Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:22:45.612517 master-1 kubenswrapper[4740]: I1014 13:22:45.612505 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:45.953797 master-1 kubenswrapper[4740]: I1014 13:22:45.953668 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5z72d\" (UniqueName: \"kubernetes.io/projected/8831d469-1dd6-492a-81e7-41fe30dbb6e3-kube-api-access-5z72d\") pod \"route-controller-manager-7968c6c999-vcjcn\" (UID: \"8831d469-1dd6-492a-81e7-41fe30dbb6e3\") " pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" Oct 14 13:22:46.109883 master-1 kubenswrapper[4740]: I1014 13:22:46.109780 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" Oct 14 13:22:48.128558 master-1 kubenswrapper[4740]: I1014 13:22:48.124817 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:48.144337 master-1 kubenswrapper[4740]: I1014 13:22:48.144149 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:22:48.619966 master-1 kubenswrapper[4740]: I1014 13:22:48.619871 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpz6r\" (UniqueName: \"kubernetes.io/projected/9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8-kube-api-access-hpz6r\") pod \"controller-manager-78c5d9fccd-2lzk5\" (UID: \"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8\") " pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5" Oct 14 13:22:48.828552 master-1 kubenswrapper[4740]: I1014 13:22:48.828456 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5" Oct 14 13:22:49.685111 master-1 kubenswrapper[4740]: I1014 13:22:49.685047 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn"] Oct 14 13:22:49.717953 master-1 kubenswrapper[4740]: W1014 13:22:49.717691 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8831d469_1dd6_492a_81e7_41fe30dbb6e3.slice/crio-7e99436f4cd39179e4175c296e41b7c9ab1f8d0f4311949475e72e9a44752920 WatchSource:0}: Error finding container 7e99436f4cd39179e4175c296e41b7c9ab1f8d0f4311949475e72e9a44752920: Status 404 returned error can't find the container with id 7e99436f4cd39179e4175c296e41b7c9ab1f8d0f4311949475e72e9a44752920 Oct 14 13:22:50.068759 master-1 kubenswrapper[4740]: I1014 13:22:50.068697 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" event={"ID":"8831d469-1dd6-492a-81e7-41fe30dbb6e3","Type":"ContainerStarted","Data":"87baaecddf27b2cd0a238d0782b38f5836f4f2b89db2166a91a79bde20223ac3"} Oct 14 13:22:50.069026 master-1 kubenswrapper[4740]: I1014 13:22:50.068769 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" event={"ID":"8831d469-1dd6-492a-81e7-41fe30dbb6e3","Type":"ContainerStarted","Data":"7e99436f4cd39179e4175c296e41b7c9ab1f8d0f4311949475e72e9a44752920"} Oct 14 13:22:50.069084 master-1 kubenswrapper[4740]: I1014 13:22:50.069059 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" Oct 14 13:22:51.069282 master-1 kubenswrapper[4740]: I1014 13:22:51.069180 4740 patch_prober.go:28] interesting pod/route-controller-manager-7968c6c999-vcjcn 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.91:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:22:51.070074 master-1 kubenswrapper[4740]: I1014 13:22:51.069288 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" podUID="8831d469-1dd6-492a-81e7-41fe30dbb6e3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.91:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 13:22:52.077434 master-1 kubenswrapper[4740]: I1014 13:22:52.077332 4740 patch_prober.go:28] interesting pod/route-controller-manager-7968c6c999-vcjcn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.91:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:22:52.078378 master-1 kubenswrapper[4740]: I1014 13:22:52.077442 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" podUID="8831d469-1dd6-492a-81e7-41fe30dbb6e3" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.91:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 13:22:52.838729 master-1 kubenswrapper[4740]: I1014 13:22:52.838635 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5"] Oct 14 13:22:53.132271 master-1 kubenswrapper[4740]: I1014 13:22:53.123664 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5" event={"ID":"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8","Type":"ContainerStarted","Data":"718e6943abce79f922bc7156cc0fd287230dc1783b5b6bc74610f6719e66a108"} Oct 14 13:22:53.174018 master-1 kubenswrapper[4740]: I1014 13:22:53.173935 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" podStartSLOduration=11.173917672 podStartE2EDuration="11.173917672s" podCreationTimestamp="2025-10-14 13:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:22:53.113421214 +0000 UTC m=+998.923710543" watchObservedRunningTime="2025-10-14 13:22:53.173917672 +0000 UTC m=+998.984207001" Oct 14 13:22:53.209806 master-1 kubenswrapper[4740]: I1014 13:22:53.209738 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-668956f9dd-mlrd8"] Oct 14 13:22:53.345998 master-1 kubenswrapper[4740]: I1014 13:22:53.345857 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-s9576"] Oct 14 13:22:53.346579 master-1 kubenswrapper[4740]: I1014 13:22:53.346526 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" containerID="cri-o://a239b7f63812583aa918ecca92d78715042d5630c3b5d976852ccf0f81559882" gracePeriod=120 Oct 14 13:22:53.579772 master-1 kubenswrapper[4740]: I1014 13:22:53.579642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " 
pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:22:53.580053 master-1 kubenswrapper[4740]: I1014 13:22:53.579820 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cc579fa5-c1e0-40ed-b1f3-e953a42e74d6-etc-docker\") pod \"catalogd-controller-manager-596f9d8bbf-wn7c6\" (UID: \"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6\") " pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:22:53.681046 master-1 kubenswrapper[4740]: I1014 13:22:53.680918 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:22:53.681250 master-1 kubenswrapper[4740]: I1014 13:22:53.681079 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/180ced15-1fb1-464d-85f2-0bcc0d836dab-etc-docker\") pod \"operator-controller-controller-manager-668cb7cdc8-lwlfz\" (UID: \"180ced15-1fb1-464d-85f2-0bcc0d836dab\") " pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:22:53.802935 master-1 kubenswrapper[4740]: I1014 13:22:53.802884 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:22:53.803250 master-1 kubenswrapper[4740]: I1014 13:22:53.803195 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:22:54.131267 master-1 kubenswrapper[4740]: I1014 13:22:54.131146 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5" event={"ID":"9ebf9e4d-a10a-4eda-bbe5-c2f806cc63f8","Type":"ContainerStarted","Data":"0bca4c82c989bfaf411265c73c0f0ac89aba0872d1ac45a5466db63124fc7d3c"} Oct 14 13:22:54.131584 master-1 kubenswrapper[4740]: I1014 13:22:54.131522 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5" Oct 14 13:22:54.138371 master-1 kubenswrapper[4740]: I1014 13:22:54.138309 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5" Oct 14 13:22:54.203984 master-1 kubenswrapper[4740]: I1014 13:22:54.203858 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5" podStartSLOduration=12.203817271 podStartE2EDuration="12.203817271s" podCreationTimestamp="2025-10-14 13:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:22:54.20029611 +0000 UTC m=+1000.010585459" watchObservedRunningTime="2025-10-14 13:22:54.203817271 +0000 UTC m=+1000.014106600" Oct 14 13:22:55.187465 master-1 kubenswrapper[4740]: I1014 13:22:55.187403 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"] Oct 14 13:22:55.230158 master-1 kubenswrapper[4740]: I1014 13:22:55.201382 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"] Oct 14 13:22:55.230158 master-1 kubenswrapper[4740]: W1014 13:22:55.208107 4740 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc579fa5_c1e0_40ed_b1f3_e953a42e74d6.slice/crio-6c063835aca4cdc0972ded995daaad789e248983b30c57354a21060848f67324 WatchSource:0}: Error finding container 6c063835aca4cdc0972ded995daaad789e248983b30c57354a21060848f67324: Status 404 returned error can't find the container with id 6c063835aca4cdc0972ded995daaad789e248983b30c57354a21060848f67324 Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: I1014 13:22:55.239737 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:22:55.242542 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:22:55.242542 
master-1 kubenswrapper[4740]: I1014 13:22:55.240126 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:22:56.117011 master-1 kubenswrapper[4740]: I1014 13:22:56.116945 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn" Oct 14 13:22:56.168180 master-1 kubenswrapper[4740]: I1014 13:22:56.168087 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" event={"ID":"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6","Type":"ContainerStarted","Data":"b37acdb17cf04e93a0fa62db4c6e431746b497cdd8cf2fc840d52d99251de7fd"} Oct 14 13:22:56.168180 master-1 kubenswrapper[4740]: I1014 13:22:56.168160 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" event={"ID":"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6","Type":"ContainerStarted","Data":"312cbd71d6e41c45818c426b9c52f36007872b2b4fa84311ca432ad026b45ff8"} Oct 14 13:22:56.168180 master-1 kubenswrapper[4740]: I1014 13:22:56.168175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" event={"ID":"cc579fa5-c1e0-40ed-b1f3-e953a42e74d6","Type":"ContainerStarted","Data":"6c063835aca4cdc0972ded995daaad789e248983b30c57354a21060848f67324"} Oct 14 13:22:56.168632 master-1 kubenswrapper[4740]: I1014 13:22:56.168281 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" Oct 14 13:22:56.170341 master-1 kubenswrapper[4740]: I1014 13:22:56.170279 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" event={"ID":"180ced15-1fb1-464d-85f2-0bcc0d836dab","Type":"ContainerStarted","Data":"ad69095216721b29e77c29b1017314bd8a0814445a3f693603cccbe7f620af7e"} Oct 14 13:22:56.170438 master-1 kubenswrapper[4740]: I1014 13:22:56.170343 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" event={"ID":"180ced15-1fb1-464d-85f2-0bcc0d836dab","Type":"ContainerStarted","Data":"4651f5a15763322130ea2750a441cd84430f293687c5a4fdd9ec01b0add3f90c"} Oct 14 13:22:56.170438 master-1 kubenswrapper[4740]: I1014 13:22:56.170366 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" event={"ID":"180ced15-1fb1-464d-85f2-0bcc0d836dab","Type":"ContainerStarted","Data":"ccafb3e6bb5d3f845369c8e64772dd1df13ac0d67673235ea416b8ef6034cff7"} Oct 14 13:22:57.185394 master-1 kubenswrapper[4740]: I1014 13:22:57.184826 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" Oct 14 13:22:57.865441 master-1 kubenswrapper[4740]: I1014 13:22:57.865350 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6" podStartSLOduration=864.865331613 podStartE2EDuration="14m24.865331613s" podCreationTimestamp="2025-10-14 13:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:22:57.860833186 +0000 UTC m=+1003.671122515" watchObservedRunningTime="2025-10-14 13:22:57.865331613 +0000 UTC m=+1003.675620942" Oct 14 13:22:57.908247 master-1 kubenswrapper[4740]: I1014 13:22:57.908157 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz" podStartSLOduration=864.908139063 podStartE2EDuration="14m24.908139063s" podCreationTimestamp="2025-10-14 13:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:22:57.90493391 +0000 UTC m=+1003.715223259" watchObservedRunningTime="2025-10-14 13:22:57.908139063 +0000 UTC m=+1003.718428392" Oct 14 13:22:58.022367 master-1 kubenswrapper[4740]: I1014 13:22:58.022190 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-595d5f74d8-hck8v"] Oct 14 13:22:58.023729 master-1 kubenswrapper[4740]: I1014 13:22:58.022909 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" containerID="cri-o://194c25a7f27d321abe7b43f432aa05c8f7acba7f239a24bf7b4072916b25b5f2" gracePeriod=120 Oct 14 13:22:58.023729 master-1 kubenswrapper[4740]: I1014 13:22:58.023346 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://1d3ba628773d880348e99b016c5d83127177dbbd2f44204a133e0dcdcec7087c" gracePeriod=120 Oct 14 13:22:58.194314 master-1 kubenswrapper[4740]: I1014 13:22:58.194252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" event={"ID":"a0a34636-f938-4d5d-952c-68b1433d01cc","Type":"ContainerDied","Data":"1d3ba628773d880348e99b016c5d83127177dbbd2f44204a133e0dcdcec7087c"} Oct 14 13:22:58.194314 master-1 kubenswrapper[4740]: I1014 13:22:58.194311 4740 generic.go:334] "Generic (PLEG): container finished" podID="a0a34636-f938-4d5d-952c-68b1433d01cc" 
containerID="1d3ba628773d880348e99b016c5d83127177dbbd2f44204a133e0dcdcec7087c" exitCode=0 Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: I1014 13:23:00.211927 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:00.212034 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:00.213207 master-1 kubenswrapper[4740]: I1014 13:23:00.212391 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: I1014 
13:23:02.815805 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:23:02.815892 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:02.815892 master-1 
kubenswrapper[4740]: readyz check failed
Oct 14 13:23:02.818005 master-1 kubenswrapper[4740]: I1014 13:23:02.815898 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:03.806381 master-1 kubenswrapper[4740]: I1014 13:23:03.806293 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6"
Oct 14 13:23:03.806854 master-1 kubenswrapper[4740]: I1014 13:23:03.806712 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz"
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: I1014 13:23:05.213714 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:05.213809 master-1 kubenswrapper[4740]: I1014 13:23:05.213794 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:05.215441 master-1 kubenswrapper[4740]: I1014 13:23:05.213874 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576"
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: I1014 13:23:05.613266 4740 patch_prober.go:28] interesting pod/metrics-server-8475fbcb68-p4n8s container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: [+]metric-storage-ready ok
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: [+]metric-informer-sync ok
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: [+]metadata-informer-sync ok
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:05.613841 master-1 kubenswrapper[4740]: I1014 13:23:05.613377 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:05.746611 master-1 kubenswrapper[4740]: I1014 13:23:05.746511 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-1"
Oct 14 13:23:05.797181 master-1 kubenswrapper[4740]: I1014 13:23:05.797131 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-1"
Oct 14 13:23:06.316416 master-1 kubenswrapper[4740]: I1014 13:23:06.316269 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-1"
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: I1014 13:23:07.811391 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:07.811450 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:07.812585 master-1 kubenswrapper[4740]: I1014 13:23:07.811456 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:10.084955 master-1 kubenswrapper[4740]: I1014 13:23:10.084850 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-mzrkb_ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67/assisted-installer-controller/0.log"
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: I1014 13:23:10.213444 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:10.213551 master-1 kubenswrapper[4740]: I1014 13:23:10.213525 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: I1014 13:23:12.814685 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:12.814825 master-1 kubenswrapper[4740]: I1014 13:23:12.814800 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:12.817043 master-1 kubenswrapper[4740]: I1014 13:23:12.814999 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: I1014 13:23:15.210445 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:15.210549 master-1 kubenswrapper[4740]: I1014 13:23:15.210523 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: I1014 13:23:17.819112 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:17.819202 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:17.821541 master-1 kubenswrapper[4740]: I1014 13:23:17.819214 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:18.061539 master-1 kubenswrapper[4740]: I1014 13:23:18.061464 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf"]
Oct 14 13:23:18.246045 master-1 kubenswrapper[4740]: I1014 13:23:18.241937 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-668956f9dd-mlrd8" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console" containerID="cri-o://e39245116eb198b69028ed732077ffccaa450a3f2e0c328aea1700b8957f8d11" gracePeriod=15
Oct 14 13:23:18.369294 master-1 kubenswrapper[4740]: I1014 13:23:18.369114 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-668956f9dd-mlrd8_9a83514f-e8a3-4a35-aaa4-cc530166fc2f/console/0.log"
Oct 14 13:23:18.369294 master-1 kubenswrapper[4740]: I1014 13:23:18.369183 4740 generic.go:334] "Generic (PLEG): container finished" podID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerID="e39245116eb198b69028ed732077ffccaa450a3f2e0c328aea1700b8957f8d11" exitCode=2
Oct 14 13:23:18.369294 master-1 kubenswrapper[4740]: I1014 13:23:18.369217 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668956f9dd-mlrd8" event={"ID":"9a83514f-e8a3-4a35-aaa4-cc530166fc2f","Type":"ContainerDied","Data":"e39245116eb198b69028ed732077ffccaa450a3f2e0c328aea1700b8957f8d11"}
Oct 14 13:23:18.853691 master-1 kubenswrapper[4740]: I1014 13:23:18.853649 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-668956f9dd-mlrd8_9a83514f-e8a3-4a35-aaa4-cc530166fc2f/console/0.log"
Oct 14 13:23:18.854136 master-1 kubenswrapper[4740]: I1014 13:23:18.853730 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:23:18.961135 master-1 kubenswrapper[4740]: I1014 13:23:18.961022 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-config\") pod \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") "
Oct 14 13:23:18.961135 master-1 kubenswrapper[4740]: I1014 13:23:18.961126 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-service-ca\") pod \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") "
Oct 14 13:23:18.961609 master-1 kubenswrapper[4740]: I1014 13:23:18.961189 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-oauth-serving-cert\") pod \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") "
Oct 14 13:23:18.962420 master-1 kubenswrapper[4740]: I1014 13:23:18.961775 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-oauth-config\") pod \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") "
Oct 14 13:23:18.962420 master-1 kubenswrapper[4740]: I1014 13:23:18.961861 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-serving-cert\") pod \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") "
Oct 14 13:23:18.962420 master-1 kubenswrapper[4740]: I1014 13:23:18.961926 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm7fk\" (UniqueName: \"kubernetes.io/projected/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-kube-api-access-wm7fk\") pod \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\" (UID: \"9a83514f-e8a3-4a35-aaa4-cc530166fc2f\") "
Oct 14 13:23:18.962420 master-1 kubenswrapper[4740]: I1014 13:23:18.962340 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-service-ca" (OuterVolumeSpecName: "service-ca") pod "9a83514f-e8a3-4a35-aaa4-cc530166fc2f" (UID: "9a83514f-e8a3-4a35-aaa4-cc530166fc2f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:18.962420 master-1 kubenswrapper[4740]: I1014 13:23:18.962398 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9a83514f-e8a3-4a35-aaa4-cc530166fc2f" (UID: "9a83514f-e8a3-4a35-aaa4-cc530166fc2f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:18.963164 master-1 kubenswrapper[4740]: I1014 13:23:18.962925 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-service-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:18.963164 master-1 kubenswrapper[4740]: I1014 13:23:18.962948 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-oauth-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:18.963471 master-1 kubenswrapper[4740]: I1014 13:23:18.963411 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-config" (OuterVolumeSpecName: "console-config") pod "9a83514f-e8a3-4a35-aaa4-cc530166fc2f" (UID: "9a83514f-e8a3-4a35-aaa4-cc530166fc2f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:18.965060 master-1 kubenswrapper[4740]: I1014 13:23:18.965010 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9a83514f-e8a3-4a35-aaa4-cc530166fc2f" (UID: "9a83514f-e8a3-4a35-aaa4-cc530166fc2f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:18.967264 master-1 kubenswrapper[4740]: I1014 13:23:18.967164 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9a83514f-e8a3-4a35-aaa4-cc530166fc2f" (UID: "9a83514f-e8a3-4a35-aaa4-cc530166fc2f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:18.967676 master-1 kubenswrapper[4740]: I1014 13:23:18.967623 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-kube-api-access-wm7fk" (OuterVolumeSpecName: "kube-api-access-wm7fk") pod "9a83514f-e8a3-4a35-aaa4-cc530166fc2f" (UID: "9a83514f-e8a3-4a35-aaa4-cc530166fc2f"). InnerVolumeSpecName "kube-api-access-wm7fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:23:19.065491 master-1 kubenswrapper[4740]: I1014 13:23:19.065024 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:19.065491 master-1 kubenswrapper[4740]: I1014 13:23:19.065085 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:19.065491 master-1 kubenswrapper[4740]: I1014 13:23:19.065109 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-console-oauth-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:19.065491 master-1 kubenswrapper[4740]: I1014 13:23:19.065128 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm7fk\" (UniqueName: \"kubernetes.io/projected/9a83514f-e8a3-4a35-aaa4-cc530166fc2f-kube-api-access-wm7fk\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:19.378778 master-1 kubenswrapper[4740]: I1014 13:23:19.378630 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-668956f9dd-mlrd8_9a83514f-e8a3-4a35-aaa4-cc530166fc2f/console/0.log"
Oct 14 13:23:19.378778 master-1 kubenswrapper[4740]: I1014 13:23:19.378725 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-668956f9dd-mlrd8" event={"ID":"9a83514f-e8a3-4a35-aaa4-cc530166fc2f","Type":"ContainerDied","Data":"09d264ac75e76a234bfd604dd8a9108f6dd703393cb192c91624d7f9d9e426ed"}
Oct 14 13:23:19.379050 master-1 kubenswrapper[4740]: I1014 13:23:19.378790 4740 scope.go:117] "RemoveContainer" containerID="e39245116eb198b69028ed732077ffccaa450a3f2e0c328aea1700b8957f8d11"
Oct 14 13:23:19.379050 master-1 kubenswrapper[4740]: I1014 13:23:19.378965 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-668956f9dd-mlrd8"
Oct 14 13:23:19.436059 master-1 kubenswrapper[4740]: I1014 13:23:19.435945 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-668956f9dd-mlrd8"]
Oct 14 13:23:19.440321 master-1 kubenswrapper[4740]: I1014 13:23:19.440221 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-668956f9dd-mlrd8"]
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: I1014 13:23:20.210951 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:20.211064 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:20.212059 master-1 kubenswrapper[4740]: I1014 13:23:20.211110 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:20.952544 master-1 kubenswrapper[4740]: I1014 13:23:20.952493 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" path="/var/lib/kubelet/pods/9a83514f-e8a3-4a35-aaa4-cc530166fc2f/volumes"
Oct 14 13:23:21.161139 master-1 kubenswrapper[4740]: I1014 13:23:21.161085 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-77d8f866f9-skvf6"]
Oct 14 13:23:21.161831 master-1 kubenswrapper[4740]: E1014 13:23:21.161808 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console"
Oct 14 13:23:21.161937 master-1 kubenswrapper[4740]: I1014 13:23:21.161924 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console"
Oct 14 13:23:21.162149 master-1 kubenswrapper[4740]: I1014 13:23:21.162134 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a83514f-e8a3-4a35-aaa4-cc530166fc2f" containerName="console"
Oct 14 13:23:21.162861 master-1 kubenswrapper[4740]: I1014 13:23:21.162841 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.169184 master-1 kubenswrapper[4740]: I1014 13:23:21.169126 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 14 13:23:21.169351 master-1 kubenswrapper[4740]: I1014 13:23:21.169332 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-r2r7j"
Oct 14 13:23:21.170098 master-1 kubenswrapper[4740]: I1014 13:23:21.170032 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 14 13:23:21.170098 master-1 kubenswrapper[4740]: I1014 13:23:21.170079 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 14 13:23:21.170289 master-1 kubenswrapper[4740]: I1014 13:23:21.170203 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 14 13:23:21.170450 master-1 kubenswrapper[4740]: I1014 13:23:21.170419 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Oct 14 13:23:21.179314 master-1 kubenswrapper[4740]: I1014 13:23:21.179282 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 14 13:23:21.301374 master-1 kubenswrapper[4740]: I1014 13:23:21.301293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-oauth-serving-cert\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.301374 master-1 kubenswrapper[4740]: I1014 13:23:21.301342 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-serving-cert\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.302423 master-1 kubenswrapper[4740]: I1014 13:23:21.301414 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-config\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.302423 master-1 kubenswrapper[4740]: I1014 13:23:21.301445 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-oauth-config\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.302423 master-1 kubenswrapper[4740]: I1014 13:23:21.301467 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-trusted-ca-bundle\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.302423 master-1 kubenswrapper[4740]: I1014 13:23:21.301502 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7w6g\" (UniqueName: \"kubernetes.io/projected/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-kube-api-access-b7w6g\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.302423 master-1 kubenswrapper[4740]: I1014 13:23:21.301545 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-service-ca\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.342932 master-1 kubenswrapper[4740]: I1014 13:23:21.342834 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77d8f866f9-skvf6"]
Oct 14 13:23:21.402512 master-1 kubenswrapper[4740]: I1014 13:23:21.402424 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-oauth-config\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.402512 master-1 kubenswrapper[4740]: I1014 13:23:21.402486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-trusted-ca-bundle\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.402862 master-1 kubenswrapper[4740]: I1014 13:23:21.402531 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7w6g\" (UniqueName: \"kubernetes.io/projected/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-kube-api-access-b7w6g\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.402862 master-1 kubenswrapper[4740]: I1014 13:23:21.402576 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-service-ca\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.402862 master-1 kubenswrapper[4740]: I1014 13:23:21.402627 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-oauth-serving-cert\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.402862 master-1 kubenswrapper[4740]: I1014 13:23:21.402648 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-serving-cert\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.402862 master-1 kubenswrapper[4740]: I1014 13:23:21.402709 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-config\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.403930 master-1 kubenswrapper[4740]: I1014 13:23:21.403888 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-config\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.404091 master-1 kubenswrapper[4740]: I1014 13:23:21.404050 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-service-ca\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.404893 master-1 kubenswrapper[4740]: I1014 13:23:21.404842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-oauth-serving-cert\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.405871 master-1 kubenswrapper[4740]: I1014 13:23:21.405830 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-trusted-ca-bundle\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.406801 master-1 kubenswrapper[4740]: I1014 13:23:21.406737 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-oauth-config\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.408393 master-1 kubenswrapper[4740]: I1014 13:23:21.408346 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-serving-cert\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.594293 master-1 kubenswrapper[4740]: I1014 13:23:21.594166 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7w6g\" (UniqueName: \"kubernetes.io/projected/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-kube-api-access-b7w6g\") pod \"console-77d8f866f9-skvf6\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") " pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:21.780198 master-1 kubenswrapper[4740]: I1014 13:23:21.780087 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:22.231643 master-1 kubenswrapper[4740]: I1014 13:23:22.231571 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77d8f866f9-skvf6"]
Oct 14 13:23:22.239461 master-1 kubenswrapper[4740]: W1014 13:23:22.239394 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe87fbd6_00fb_4304_b1c8_70ff91c6b278.slice/crio-03e1bae33777efe1bd0baf164ff5ad35bbfa1d3bd4a412da0313adfcc87a5400 WatchSource:0}: Error finding container 03e1bae33777efe1bd0baf164ff5ad35bbfa1d3bd4a412da0313adfcc87a5400: Status 404 returned error can't find the container with id 03e1bae33777efe1bd0baf164ff5ad35bbfa1d3bd4a412da0313adfcc87a5400
Oct 14 13:23:22.406170 master-1 kubenswrapper[4740]: I1014 13:23:22.406086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77d8f866f9-skvf6" event={"ID":"fe87fbd6-00fb-4304-b1c8-70ff91c6b278","Type":"ContainerStarted","Data":"f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2"}
Oct 14 13:23:22.406170 master-1 kubenswrapper[4740]: I1014 13:23:22.406158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77d8f866f9-skvf6" event={"ID":"fe87fbd6-00fb-4304-b1c8-70ff91c6b278","Type":"ContainerStarted","Data":"03e1bae33777efe1bd0baf164ff5ad35bbfa1d3bd4a412da0313adfcc87a5400"}
Oct 14 13:23:22.432362 master-1 kubenswrapper[4740]: I1014 13:23:22.432207 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-77d8f866f9-skvf6" podStartSLOduration=2.43218489 podStartE2EDuration="2.43218489s" podCreationTimestamp="2025-10-14 13:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:23:22.426934284 +0000 UTC m=+1028.237223643" watchObservedRunningTime="2025-10-14 13:23:22.43218489 +0000 UTC m=+1028.242474229" Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: I1014 13:23:22.811908 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 
13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:22.811974 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:22.812731 master-1 kubenswrapper[4740]: I1014 13:23:22.812000 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:23.203598 master-1 kubenswrapper[4740]: I1014 13:23:23.203461 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-1"] Oct 14 13:23:23.203988 master-1 kubenswrapper[4740]: I1014 13:23:23.203776 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="alertmanager" containerID="cri-o://d6fbd521b7e482875c76bbbf31905dd68738819cc22f806fcdfa74994c0357c3" gracePeriod=120 Oct 14 13:23:23.203988 master-1 kubenswrapper[4740]: I1014 13:23:23.203826 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy-web" containerID="cri-o://2bab329603dda3b4c9b113215f87430323c4479f0804295ed235b9f0cdcfd9da" gracePeriod=120 Oct 14 13:23:23.203988 master-1 kubenswrapper[4740]: I1014 13:23:23.203880 4740 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-monitoring/alertmanager-main-1" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="prom-label-proxy" containerID="cri-o://81bb6172d8fac973a83106863ff1970861e0f56fc47a0169b6bfb8b4e383deb0" gracePeriod=120 Oct 14 13:23:23.203988 master-1 kubenswrapper[4740]: I1014 13:23:23.203826 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy-metric" containerID="cri-o://865870b6b49f0cb5a23675fad0cb08752b49e92a717e00ab381a0955ca070aa7" gracePeriod=120 Oct 14 13:23:23.204285 master-1 kubenswrapper[4740]: I1014 13:23:23.203920 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="config-reloader" containerID="cri-o://896437c579da16931c104f320f48e66ad3bdacca0402b226cbf829c7415c8533" gracePeriod=120 Oct 14 13:23:23.204285 master-1 kubenswrapper[4740]: I1014 13:23:23.203996 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-1" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy" containerID="cri-o://2d35af07a49e7f21f0ba554ddc9bea2d97b4fcbacd5c0e98974581e6d7435ea4" gracePeriod=120 Oct 14 13:23:23.419031 master-1 kubenswrapper[4740]: I1014 13:23:23.418990 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerID="81bb6172d8fac973a83106863ff1970861e0f56fc47a0169b6bfb8b4e383deb0" exitCode=0 Oct 14 13:23:23.419031 master-1 kubenswrapper[4740]: I1014 13:23:23.419026 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerID="2d35af07a49e7f21f0ba554ddc9bea2d97b4fcbacd5c0e98974581e6d7435ea4" exitCode=0 Oct 14 13:23:23.419031 master-1 kubenswrapper[4740]: I1014 13:23:23.419039 4740 
generic.go:334] "Generic (PLEG): container finished" podID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerID="896437c579da16931c104f320f48e66ad3bdacca0402b226cbf829c7415c8533" exitCode=0 Oct 14 13:23:23.419590 master-1 kubenswrapper[4740]: I1014 13:23:23.419036 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"81bb6172d8fac973a83106863ff1970861e0f56fc47a0169b6bfb8b4e383deb0"} Oct 14 13:23:23.419590 master-1 kubenswrapper[4740]: I1014 13:23:23.419103 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"2d35af07a49e7f21f0ba554ddc9bea2d97b4fcbacd5c0e98974581e6d7435ea4"} Oct 14 13:23:23.419590 master-1 kubenswrapper[4740]: I1014 13:23:23.419125 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"896437c579da16931c104f320f48e66ad3bdacca0402b226cbf829c7415c8533"} Oct 14 13:23:24.438270 master-1 kubenswrapper[4740]: I1014 13:23:24.438181 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerID="865870b6b49f0cb5a23675fad0cb08752b49e92a717e00ab381a0955ca070aa7" exitCode=0 Oct 14 13:23:24.439085 master-1 kubenswrapper[4740]: I1014 13:23:24.438287 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerID="2bab329603dda3b4c9b113215f87430323c4479f0804295ed235b9f0cdcfd9da" exitCode=0 Oct 14 13:23:24.439085 master-1 kubenswrapper[4740]: I1014 13:23:24.438321 4740 generic.go:334] "Generic (PLEG): container finished" podID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerID="d6fbd521b7e482875c76bbbf31905dd68738819cc22f806fcdfa74994c0357c3" exitCode=0 Oct 14 13:23:24.439085 
master-1 kubenswrapper[4740]: I1014 13:23:24.438362 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"865870b6b49f0cb5a23675fad0cb08752b49e92a717e00ab381a0955ca070aa7"} Oct 14 13:23:24.439085 master-1 kubenswrapper[4740]: I1014 13:23:24.438558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"2bab329603dda3b4c9b113215f87430323c4479f0804295ed235b9f0cdcfd9da"} Oct 14 13:23:24.439085 master-1 kubenswrapper[4740]: I1014 13:23:24.438630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"d6fbd521b7e482875c76bbbf31905dd68738819cc22f806fcdfa74994c0357c3"} Oct 14 13:23:24.850207 master-1 kubenswrapper[4740]: I1014 13:23:24.850148 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-1" Oct 14 13:23:24.998483 master-1 kubenswrapper[4740]: I1014 13:23:24.998411 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-config-out\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.998750 master-1 kubenswrapper[4740]: I1014 13:23:24.998519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv2kd\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-kube-api-access-hv2kd\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.998750 master-1 kubenswrapper[4740]: I1014 13:23:24.998583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-trusted-ca-bundle\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.998750 master-1 kubenswrapper[4740]: I1014 13:23:24.998671 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-main-tls\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.998750 master-1 kubenswrapper[4740]: I1014 13:23:24.998736 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-metric\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: 
\"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.998922 master-1 kubenswrapper[4740]: I1014 13:23:24.998774 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-web\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.998922 master-1 kubenswrapper[4740]: I1014 13:23:24.998801 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-web-config\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.999538 master-1 kubenswrapper[4740]: I1014 13:23:24.999500 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-tls-assets\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.999606 master-1 kubenswrapper[4740]: I1014 13:23:24.999545 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.999651 master-1 kubenswrapper[4740]: I1014 13:23:24.999559 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). 
InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:24.999697 master-1 kubenswrapper[4740]: I1014 13:23:24.999588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-metrics-client-ca\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.999803 master-1 kubenswrapper[4740]: I1014 13:23:24.999761 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-config-volume\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:24.999873 master-1 kubenswrapper[4740]: I1014 13:23:24.999840 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-main-db\") pod \"3e010854-ec42-42d1-8865-0fe4c78214ef\" (UID: \"3e010854-ec42-42d1-8865-0fe4c78214ef\") " Oct 14 13:23:25.000372 master-1 kubenswrapper[4740]: I1014 13:23:25.000330 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:25.001410 master-1 kubenswrapper[4740]: I1014 13:23:25.001343 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:23:25.001494 master-1 kubenswrapper[4740]: I1014 13:23:25.001463 4740 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.001541 master-1 kubenswrapper[4740]: I1014 13:23:25.001495 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3e010854-ec42-42d1-8865-0fe4c78214ef-metrics-client-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.001957 master-1 kubenswrapper[4740]: I1014 13:23:25.001912 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-kube-api-access-hv2kd" (OuterVolumeSpecName: "kube-api-access-hv2kd") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "kube-api-access-hv2kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:23:25.002013 master-1 kubenswrapper[4740]: I1014 13:23:25.001912 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). 
InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:25.002406 master-1 kubenswrapper[4740]: I1014 13:23:25.002361 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:25.003006 master-1 kubenswrapper[4740]: I1014 13:23:25.002973 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:25.003079 master-1 kubenswrapper[4740]: I1014 13:23:25.003040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:23:25.004052 master-1 kubenswrapper[4740]: I1014 13:23:25.004001 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:25.005724 master-1 kubenswrapper[4740]: I1014 13:23:25.005637 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-config-out" (OuterVolumeSpecName: "config-out") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:23:25.019180 master-1 kubenswrapper[4740]: I1014 13:23:25.019109 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:25.069905 master-1 kubenswrapper[4740]: I1014 13:23:25.069844 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-web-config" (OuterVolumeSpecName: "web-config") pod "3e010854-ec42-42d1-8865-0fe4c78214ef" (UID: "3e010854-ec42-42d1-8865-0fe4c78214ef"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:25.103367 master-1 kubenswrapper[4740]: I1014 13:23:25.103291 4740 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-main-tls\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103367 master-1 kubenswrapper[4740]: I1014 13:23:25.103341 4740 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103367 master-1 kubenswrapper[4740]: I1014 13:23:25.103358 4740 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103367 master-1 kubenswrapper[4740]: I1014 13:23:25.103373 4740 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-web-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103702 master-1 kubenswrapper[4740]: I1014 13:23:25.103385 4740 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-tls-assets\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103702 master-1 kubenswrapper[4740]: I1014 13:23:25.103400 4740 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-secret-alertmanager-kube-rbac-proxy\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103702 master-1 kubenswrapper[4740]: I1014 13:23:25.103413 4740 
reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3e010854-ec42-42d1-8865-0fe4c78214ef-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103702 master-1 kubenswrapper[4740]: I1014 13:23:25.103428 4740 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-alertmanager-main-db\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103702 master-1 kubenswrapper[4740]: I1014 13:23:25.103441 4740 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e010854-ec42-42d1-8865-0fe4c78214ef-config-out\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.103702 master-1 kubenswrapper[4740]: I1014 13:23:25.103452 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv2kd\" (UniqueName: \"kubernetes.io/projected/3e010854-ec42-42d1-8865-0fe4c78214ef-kube-api-access-hv2kd\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: I1014 13:23:25.211778 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 
13:23:25.211868 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:25.211868 master-1 kubenswrapper[4740]: I1014 13:23:25.211857 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:25.455444 master-1 kubenswrapper[4740]: I1014 13:23:25.455262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-1" event={"ID":"3e010854-ec42-42d1-8865-0fe4c78214ef","Type":"ContainerDied","Data":"d2afd0bccf90d81ae2b279c246a03cd4870951d63fc4374bfc53d36696793b56"} Oct 14 13:23:25.455444 master-1 kubenswrapper[4740]: I1014 13:23:25.455371 4740 scope.go:117] "RemoveContainer" containerID="81bb6172d8fac973a83106863ff1970861e0f56fc47a0169b6bfb8b4e383deb0" Oct 14 13:23:25.456489 master-1 kubenswrapper[4740]: I1014 13:23:25.456453 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-1"
Oct 14 13:23:25.485541 master-1 kubenswrapper[4740]: I1014 13:23:25.485472 4740 scope.go:117] "RemoveContainer" containerID="865870b6b49f0cb5a23675fad0cb08752b49e92a717e00ab381a0955ca070aa7"
Oct 14 13:23:25.534041 master-1 kubenswrapper[4740]: I1014 13:23:25.533995 4740 scope.go:117] "RemoveContainer" containerID="2d35af07a49e7f21f0ba554ddc9bea2d97b4fcbacd5c0e98974581e6d7435ea4"
Oct 14 13:23:25.551582 master-1 kubenswrapper[4740]: I1014 13:23:25.551515 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-1"]
Oct 14 13:23:25.558495 master-1 kubenswrapper[4740]: I1014 13:23:25.558455 4740 scope.go:117] "RemoveContainer" containerID="2bab329603dda3b4c9b113215f87430323c4479f0804295ed235b9f0cdcfd9da"
Oct 14 13:23:25.575825 master-1 kubenswrapper[4740]: I1014 13:23:25.575728 4740 scope.go:117] "RemoveContainer" containerID="896437c579da16931c104f320f48e66ad3bdacca0402b226cbf829c7415c8533"
Oct 14 13:23:25.589532 master-1 kubenswrapper[4740]: I1014 13:23:25.589475 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-1"]
Oct 14 13:23:25.597467 master-1 kubenswrapper[4740]: I1014 13:23:25.597399 4740 scope.go:117] "RemoveContainer" containerID="d6fbd521b7e482875c76bbbf31905dd68738819cc22f806fcdfa74994c0357c3"
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: I1014 13:23:25.613995 4740 patch_prober.go:28] interesting pod/metrics-server-8475fbcb68-p4n8s container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: [+]metric-storage-ready ok
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: [+]metric-informer-sync ok
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: [+]metadata-informer-sync ok
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:25.614078 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:25.614656 master-1 kubenswrapper[4740]: I1014 13:23:25.614078 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:25.617052 master-1 kubenswrapper[4740]: I1014 13:23:25.617029 4740 scope.go:117] "RemoveContainer" containerID="a84aecc46913d9e8fc0c5cbda4b2f3b75b648a397381adaad0e904bcace46824"
Oct 14 13:23:26.958105 master-1 kubenswrapper[4740]: I1014 13:23:26.958047 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" path="/var/lib/kubelet/pods/3e010854-ec42-42d1-8865-0fe4c78214ef/volumes"
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: I1014 13:23:27.812398 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:27.812481 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:27.813386 master-1 kubenswrapper[4740]: I1014 13:23:27.812499 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:27.825270 master-1 kubenswrapper[4740]: I1014 13:23:27.825185 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-1"]
Oct 14 13:23:27.825846 master-1 kubenswrapper[4740]: I1014 13:23:27.825801 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="prometheus" containerID="cri-o://38eaa2b002f57fd158787266306bcacdb5e72b8d03c630b6fdb586b70cd5b78c" gracePeriod=600
Oct 14 13:23:27.826057 master-1 kubenswrapper[4740]: I1014 13:23:27.825966 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="thanos-sidecar" containerID="cri-o://a836f0f0d731ba4ebc1d5f5e51a85585abeecbda30cc3a088b3ec77311ff5bed" gracePeriod=600
Oct 14 13:23:27.826152 master-1 kubenswrapper[4740]: I1014 13:23:27.826000 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="config-reloader" containerID="cri-o://b61c1ab1ec698919e1b5cef271aec9037b0600ce60d4916637ddb3a39c701d95" gracePeriod=600
Oct 14 13:23:27.826280 master-1 kubenswrapper[4740]: I1014 13:23:27.826129 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy" containerID="cri-o://f1ea437af65c58aa9a7defa07101efbb33a229bc2ca4bbc295be92bcd032e893" gracePeriod=600
Oct 14 13:23:27.826889 master-1 kubenswrapper[4740]: I1014 13:23:27.826348 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-web" containerID="cri-o://bf6d32c0ab07062e4cf2faa0fb3f11b49404272e70cf25e281d742b6cc15fdbe" gracePeriod=600
Oct 14 13:23:27.827055 master-1 kubenswrapper[4740]: I1014 13:23:27.826460 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-1" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-thanos" containerID="cri-o://5ac1218809d0fc572cfec08d0c990ed62a777d84382fd79cdbb8e11b45766b3d" gracePeriod=600
Oct 14 13:23:28.483898 master-1 kubenswrapper[4740]: I1014 13:23:28.483852 4740 generic.go:334] "Generic (PLEG): container finished" podID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerID="bf6d32c0ab07062e4cf2faa0fb3f11b49404272e70cf25e281d742b6cc15fdbe" exitCode=0
Oct 14 13:23:28.484530 master-1 kubenswrapper[4740]: I1014 13:23:28.484488 4740 generic.go:334] "Generic (PLEG): container finished" podID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerID="a836f0f0d731ba4ebc1d5f5e51a85585abeecbda30cc3a088b3ec77311ff5bed" exitCode=0
Oct 14 13:23:28.484631 master-1 kubenswrapper[4740]: I1014 13:23:28.484614 4740 generic.go:334] "Generic (PLEG): container finished" podID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerID="b61c1ab1ec698919e1b5cef271aec9037b0600ce60d4916637ddb3a39c701d95" exitCode=0
Oct 14 13:23:28.484727 master-1 kubenswrapper[4740]: I1014 13:23:28.483943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"bf6d32c0ab07062e4cf2faa0fb3f11b49404272e70cf25e281d742b6cc15fdbe"}
Oct 14 13:23:28.484825 master-1 kubenswrapper[4740]: I1014 13:23:28.484777 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"a836f0f0d731ba4ebc1d5f5e51a85585abeecbda30cc3a088b3ec77311ff5bed"}
Oct 14 13:23:28.484879 master-1 kubenswrapper[4740]: I1014 13:23:28.484824 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"b61c1ab1ec698919e1b5cef271aec9037b0600ce60d4916637ddb3a39c701d95"}
Oct 14 13:23:28.484879 master-1 kubenswrapper[4740]: I1014 13:23:28.484839 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"38eaa2b002f57fd158787266306bcacdb5e72b8d03c630b6fdb586b70cd5b78c"}
Oct 14 13:23:28.484879 master-1 kubenswrapper[4740]: I1014 13:23:28.484693 4740 generic.go:334] "Generic (PLEG): container finished" podID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerID="38eaa2b002f57fd158787266306bcacdb5e72b8d03c630b6fdb586b70cd5b78c" exitCode=0
Oct 14 13:23:29.094530 master-1 kubenswrapper[4740]: E1014 13:23:29.094420 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6539b776_6f11_4e9c_b195_cb354732ac2c.slice/crio-conmon-f1ea437af65c58aa9a7defa07101efbb33a229bc2ca4bbc295be92bcd032e893.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6539b776_6f11_4e9c_b195_cb354732ac2c.slice/crio-conmon-5ac1218809d0fc572cfec08d0c990ed62a777d84382fd79cdbb8e11b45766b3d.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 13:23:29.503432 master-1 kubenswrapper[4740]: I1014 13:23:29.503366 4740 generic.go:334] "Generic (PLEG): container finished" podID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerID="5ac1218809d0fc572cfec08d0c990ed62a777d84382fd79cdbb8e11b45766b3d" exitCode=0
Oct 14 13:23:29.503432 master-1 kubenswrapper[4740]: I1014 13:23:29.503414 4740 generic.go:334] "Generic (PLEG): container finished" podID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerID="f1ea437af65c58aa9a7defa07101efbb33a229bc2ca4bbc295be92bcd032e893" exitCode=0
Oct 14 13:23:29.504277 master-1 kubenswrapper[4740]: I1014 13:23:29.503450 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"5ac1218809d0fc572cfec08d0c990ed62a777d84382fd79cdbb8e11b45766b3d"}
Oct 14 13:23:29.504277 master-1 kubenswrapper[4740]: I1014 13:23:29.503482 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"f1ea437af65c58aa9a7defa07101efbb33a229bc2ca4bbc295be92bcd032e893"}
Oct 14 13:23:29.504277 master-1 kubenswrapper[4740]: I1014 13:23:29.503496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-1" event={"ID":"6539b776-6f11-4e9c-b195-cb354732ac2c","Type":"ContainerDied","Data":"33962ac369a7e77322dad7b7f85a4a76376c077e39f196dc4a3286462fde03f6"}
Oct 14 13:23:29.504277 master-1 kubenswrapper[4740]: I1014 13:23:29.503525 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33962ac369a7e77322dad7b7f85a4a76376c077e39f196dc4a3286462fde03f6"
Oct 14 13:23:29.523334 master-1 kubenswrapper[4740]: I1014 13:23:29.523297 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-1"
Oct 14 13:23:29.692279 master-1 kubenswrapper[4740]: I1014 13:23:29.692192 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-kubelet-serving-ca-bundle\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692498 master-1 kubenswrapper[4740]: I1014 13:23:29.692290 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-trusted-ca-bundle\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692498 master-1 kubenswrapper[4740]: I1014 13:23:29.692337 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bsn5\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-kube-api-access-5bsn5\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692498 master-1 kubenswrapper[4740]: I1014 13:23:29.692374 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-web-config\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692498 master-1 kubenswrapper[4740]: I1014 13:23:29.692416 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-thanos-prometheus-http-client-file\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692498 master-1 kubenswrapper[4740]: I1014 13:23:29.692456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-config-out\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692498 master-1 kubenswrapper[4740]: I1014 13:23:29.692496 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-metrics-client-certs\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692760 master-1 kubenswrapper[4740]: I1014 13:23:29.692537 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692760 master-1 kubenswrapper[4740]: I1014 13:23:29.692687 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-grpc-tls\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692760 master-1 kubenswrapper[4740]: I1014 13:23:29.692739 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-rulefiles-0\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692888 master-1 kubenswrapper[4740]: I1014 13:23:29.692771 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-metrics-client-ca\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692888 master-1 kubenswrapper[4740]: I1014 13:23:29.692801 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-kube-rbac-proxy\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.692888 master-1 kubenswrapper[4740]: I1014 13:23:29.692841 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-db\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.693019 master-1 kubenswrapper[4740]: I1014 13:23:29.692906 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-tls\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.693019 master-1 kubenswrapper[4740]: I1014 13:23:29.692962 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-config\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.693019 master-1 kubenswrapper[4740]: I1014 13:23:29.692986 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-serving-certs-ca-bundle\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.693019 master-1 kubenswrapper[4740]: I1014 13:23:29.693015 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-tls-assets\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.693178 master-1 kubenswrapper[4740]: I1014 13:23:29.693043 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"6539b776-6f11-4e9c-b195-cb354732ac2c\" (UID: \"6539b776-6f11-4e9c-b195-cb354732ac2c\") "
Oct 14 13:23:29.694583 master-1 kubenswrapper[4740]: I1014 13:23:29.692789 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:29.694658 master-1 kubenswrapper[4740]: I1014 13:23:29.694501 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:29.695817 master-1 kubenswrapper[4740]: I1014 13:23:29.695768 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:29.696485 master-1 kubenswrapper[4740]: I1014 13:23:29.696434 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-kube-api-access-5bsn5" (OuterVolumeSpecName: "kube-api-access-5bsn5") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "kube-api-access-5bsn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:23:29.696998 master-1 kubenswrapper[4740]: I1014 13:23:29.696966 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:29.698117 master-1 kubenswrapper[4740]: I1014 13:23:29.698059 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.698202 master-1 kubenswrapper[4740]: I1014 13:23:29.698139 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-config-out" (OuterVolumeSpecName: "config-out") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:23:29.698445 master-1 kubenswrapper[4740]: I1014 13:23:29.698399 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.699352 master-1 kubenswrapper[4740]: I1014 13:23:29.699324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.699436 master-1 kubenswrapper[4740]: I1014 13:23:29.699418 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.699782 master-1 kubenswrapper[4740]: I1014 13:23:29.699750 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.699892 master-1 kubenswrapper[4740]: I1014 13:23:29.699805 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.701457 master-1 kubenswrapper[4740]: I1014 13:23:29.701397 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-config" (OuterVolumeSpecName: "config") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.701883 master-1 kubenswrapper[4740]: I1014 13:23:29.701835 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:23:29.702364 master-1 kubenswrapper[4740]: I1014 13:23:29.702315 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.702601 master-1 kubenswrapper[4740]: I1014 13:23:29.702568 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:23:29.706266 master-1 kubenswrapper[4740]: I1014 13:23:29.706190 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:23:29.738844 master-1 kubenswrapper[4740]: I1014 13:23:29.738746 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-web-config" (OuterVolumeSpecName: "web-config") pod "6539b776-6f11-4e9c-b195-cb354732ac2c" (UID: "6539b776-6f11-4e9c-b195-cb354732ac2c"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:23:29.794808 master-1 kubenswrapper[4740]: I1014 13:23:29.794734 4740 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-kubelet-serving-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.794808 master-1 kubenswrapper[4740]: I1014 13:23:29.794795 4740 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.794808 master-1 kubenswrapper[4740]: I1014 13:23:29.794815 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bsn5\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-kube-api-access-5bsn5\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794834 4740 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-web-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794853 4740 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-thanos-prometheus-http-client-file\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794869 4740 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-config-out\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794886 4740 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-metrics-client-certs\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794905 4740 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794925 4740 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-grpc-tls\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794945 4740 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-metrics-client-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794962 4740 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-rulefiles-0\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.794978 4740 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-kube-rbac-proxy\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.795034 4740 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6539b776-6f11-4e9c-b195-cb354732ac2c-prometheus-k8s-db\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.795053 4740 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-tls\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.795069 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.795085 4740 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6539b776-6f11-4e9c-b195-cb354732ac2c-configmap-serving-certs-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795097 master-1 kubenswrapper[4740]: I1014 13:23:29.795101 4740 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6539b776-6f11-4e9c-b195-cb354732ac2c-tls-assets\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:29.795618 master-1 kubenswrapper[4740]: I1014 13:23:29.795125 4740 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6539b776-6f11-4e9c-b195-cb354732ac2c-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-1\" DevicePath \"\""
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: I1014 13:23:30.213944 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:23:30.214066 master-1 kubenswrapper[4740]: I1014 13:23:30.214040 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:23:30.512385 master-1 kubenswrapper[4740]: I1014 13:23:30.512277 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-1"
Oct 14 13:23:30.871829 master-1 kubenswrapper[4740]: I1014 13:23:30.871669 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-1"]
Oct 14 13:23:30.921975 master-1 kubenswrapper[4740]: I1014 13:23:30.921903 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-1"]
Oct 14 13:23:30.955207 master-1 kubenswrapper[4740]: I1014 13:23:30.955122 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" path="/var/lib/kubelet/pods/6539b776-6f11-4e9c-b195-cb354732ac2c/volumes"
Oct 14 13:23:31.780363 master-1 kubenswrapper[4740]: I1014 13:23:31.780271 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:31.781288 master-1 kubenswrapper[4740]: I1014 13:23:31.780389 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:31.785349 master-1 kubenswrapper[4740]: I1014 13:23:31.785296 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:31.932624 master-1 kubenswrapper[4740]: E1014 13:23:31.932535 4740 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2hutru8havafv: secret "metrics-server-2hutru8havafv" not found
Oct 14 13:23:31.932965 master-1 kubenswrapper[4740]: E1014 13:23:31.932655 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle podName:fef43de0-1319-41d0-9ca4-d4795c56c459 nodeName:}" failed. No retries permitted until 2025-10-14 13:25:33.932629043 +0000 UTC m=+1159.742918412 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle") pod "metrics-server-8475fbcb68-p4n8s" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459") : secret "metrics-server-2hutru8havafv" not found
Oct 14 13:23:32.528496 master-1 kubenswrapper[4740]: I1014 13:23:32.528433 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: I1014 13:23:32.813428 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Oct 14 13:23:32.813557 master-1
kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:32.813557 master-1 kubenswrapper[4740]: I1014 13:23:32.813498 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: I1014 13:23:35.213166 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: 
[+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:35.213293 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:35.215991 master-1 kubenswrapper[4740]: I1014 13:23:35.213322 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: I1014 13:23:37.813155 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: 
[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:37.813268 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:37.815283 master-1 kubenswrapper[4740]: I1014 13:23:37.815188 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: I1014 13:23:40.211861 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:40.211967 master-1 
kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:40.211967 master-1 kubenswrapper[4740]: I1014 13:23:40.211956 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: I1014 13:23:42.815193 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: 
[+]poststarthook/max-in-flight-filter ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:42.815713 master-1 kubenswrapper[4740]: I1014 13:23:42.815653 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:43.089411 master-1 kubenswrapper[4740]: I1014 13:23:43.089154 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" podUID="90f36641-2c8a-4c3f-83c6-3ff25d86d52e" containerName="oauth-openshift" containerID="cri-o://9b65a048ae7111360fb7f1062f39927fa58d6a586b76d6fe08a7abd7c74df1f4" gracePeriod=15 Oct 14 13:23:43.617507 master-1 kubenswrapper[4740]: I1014 
13:23:43.617303 4740 generic.go:334] "Generic (PLEG): container finished" podID="90f36641-2c8a-4c3f-83c6-3ff25d86d52e" containerID="9b65a048ae7111360fb7f1062f39927fa58d6a586b76d6fe08a7abd7c74df1f4" exitCode=0 Oct 14 13:23:43.617632 master-1 kubenswrapper[4740]: I1014 13:23:43.617515 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" event={"ID":"90f36641-2c8a-4c3f-83c6-3ff25d86d52e","Type":"ContainerDied","Data":"9b65a048ae7111360fb7f1062f39927fa58d6a586b76d6fe08a7abd7c74df1f4"} Oct 14 13:23:43.617632 master-1 kubenswrapper[4740]: I1014 13:23:43.617554 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" event={"ID":"90f36641-2c8a-4c3f-83c6-3ff25d86d52e","Type":"ContainerDied","Data":"ad2501a5b6dfd9843afca7050825cc4de7b2bfbe4b1ad3bdf2add43879d1f231"} Oct 14 13:23:43.617632 master-1 kubenswrapper[4740]: I1014 13:23:43.617568 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad2501a5b6dfd9843afca7050825cc4de7b2bfbe4b1ad3bdf2add43879d1f231" Oct 14 13:23:43.648925 master-1 kubenswrapper[4740]: I1014 13:23:43.648868 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:23:43.821500 master-1 kubenswrapper[4740]: I1014 13:23:43.821396 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-error\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822183 master-1 kubenswrapper[4740]: I1014 13:23:43.821563 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbdh9\" (UniqueName: \"kubernetes.io/projected/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-kube-api-access-nbdh9\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822183 master-1 kubenswrapper[4740]: I1014 13:23:43.821673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-ocp-branding-template\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822183 master-1 kubenswrapper[4740]: I1014 13:23:43.821769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-policies\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822183 master-1 kubenswrapper[4740]: I1014 13:23:43.821858 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-service-ca\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: 
\"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822183 master-1 kubenswrapper[4740]: I1014 13:23:43.821950 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-serving-cert\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822183 master-1 kubenswrapper[4740]: I1014 13:23:43.822125 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-cliconfig\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822611 master-1 kubenswrapper[4740]: I1014 13:23:43.822169 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-login\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822611 master-1 kubenswrapper[4740]: I1014 13:23:43.822377 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-session\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822611 master-1 kubenswrapper[4740]: I1014 13:23:43.822490 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-provider-selection\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: 
\"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822804 master-1 kubenswrapper[4740]: I1014 13:23:43.822685 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-dir\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.822988 master-1 kubenswrapper[4740]: I1014 13:23:43.822875 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-trusted-ca-bundle\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.823135 master-1 kubenswrapper[4740]: I1014 13:23:43.823051 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:43.823218 master-1 kubenswrapper[4740]: I1014 13:23:43.823095 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:23:43.823554 master-1 kubenswrapper[4740]: I1014 13:23:43.823498 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:43.824113 master-1 kubenswrapper[4740]: I1014 13:23:43.824041 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:43.825009 master-1 kubenswrapper[4740]: I1014 13:23:43.824948 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:43.825306 master-1 kubenswrapper[4740]: I1014 13:23:43.825189 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-router-certs\") pod \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\" (UID: \"90f36641-2c8a-4c3f-83c6-3ff25d86d52e\") " Oct 14 13:23:43.827641 master-1 kubenswrapper[4740]: I1014 13:23:43.827595 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.827739 master-1 kubenswrapper[4740]: I1014 13:23:43.827712 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-policies\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.827831 master-1 kubenswrapper[4740]: I1014 13:23:43.827748 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-service-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.827892 master-1 kubenswrapper[4740]: I1014 13:23:43.827843 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-cliconfig\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.827960 master-1 kubenswrapper[4740]: I1014 13:23:43.827927 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.828571 
master-1 kubenswrapper[4740]: I1014 13:23:43.828509 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:43.828571 master-1 kubenswrapper[4740]: I1014 13:23:43.828447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-kube-api-access-nbdh9" (OuterVolumeSpecName: "kube-api-access-nbdh9") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "kube-api-access-nbdh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:23:43.828697 master-1 kubenswrapper[4740]: I1014 13:23:43.828585 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:43.830542 master-1 kubenswrapper[4740]: I1014 13:23:43.830487 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:43.830542 master-1 kubenswrapper[4740]: I1014 13:23:43.830407 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:43.831344 master-1 kubenswrapper[4740]: I1014 13:23:43.831302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:43.831822 master-1 kubenswrapper[4740]: I1014 13:23:43.831686 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:43.832302 master-1 kubenswrapper[4740]: I1014 13:23:43.832253 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "90f36641-2c8a-4c3f-83c6-3ff25d86d52e" (UID: "90f36641-2c8a-4c3f-83c6-3ff25d86d52e"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:43.930416 master-1 kubenswrapper[4740]: I1014 13:23:43.930264 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-session\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.930416 master-1 kubenswrapper[4740]: I1014 13:23:43.930401 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-provider-selection\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.930416 master-1 kubenswrapper[4740]: I1014 13:23:43.930427 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-router-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.930921 master-1 kubenswrapper[4740]: I1014 13:23:43.930449 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-error\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.930921 master-1 kubenswrapper[4740]: I1014 13:23:43.930471 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbdh9\" (UniqueName: \"kubernetes.io/projected/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-kube-api-access-nbdh9\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.930921 master-1 kubenswrapper[4740]: I1014 13:23:43.930490 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-ocp-branding-template\") on node 
\"master-1\" DevicePath \"\"" Oct 14 13:23:43.930921 master-1 kubenswrapper[4740]: I1014 13:23:43.930512 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-system-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:43.930921 master-1 kubenswrapper[4740]: I1014 13:23:43.930531 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90f36641-2c8a-4c3f-83c6-3ff25d86d52e-v4-0-config-user-template-login\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:44.626977 master-1 kubenswrapper[4740]: I1014 13:23:44.626904 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf" Oct 14 13:23:44.677995 master-1 kubenswrapper[4740]: I1014 13:23:44.677945 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf"] Oct 14 13:23:44.687117 master-1 kubenswrapper[4740]: I1014 13:23:44.687051 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf"] Oct 14 13:23:44.955032 master-1 kubenswrapper[4740]: I1014 13:23:44.954762 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f36641-2c8a-4c3f-83c6-3ff25d86d52e" path="/var/lib/kubelet/pods/90f36641-2c8a-4c3f-83c6-3ff25d86d52e/volumes" Oct 14 13:23:45.206749 master-1 kubenswrapper[4740]: I1014 13:23:45.206585 4740 patch_prober.go:28] interesting pod/apiserver-7b6784d654-s9576 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body= Oct 14 13:23:45.206749 master-1 kubenswrapper[4740]: I1014 
13:23:45.206675 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.75:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.75:8443: connect: connection refused" Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: I1014 13:23:45.616690 4740 patch_prober.go:28] interesting pod/metrics-server-8475fbcb68-p4n8s container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: [+]metric-storage-ready ok Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: [+]metric-informer-sync ok Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: [+]metadata-informer-sync ok Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:45.617351 master-1 kubenswrapper[4740]: I1014 13:23:45.616765 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:45.641330 master-1 kubenswrapper[4740]: I1014 13:23:45.641262 4740 generic.go:334] "Generic (PLEG): container finished" podID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerID="a239b7f63812583aa918ecca92d78715042d5630c3b5d976852ccf0f81559882" exitCode=0 Oct 14 13:23:45.641330 master-1 
kubenswrapper[4740]: I1014 13:23:45.641308 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" event={"ID":"6492175e-e529-4b83-a4f0-45c7a30f7a86","Type":"ContainerDied","Data":"a239b7f63812583aa918ecca92d78715042d5630c3b5d976852ccf0f81559882"} Oct 14 13:23:45.897986 master-1 kubenswrapper[4740]: I1014 13:23:45.897926 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" Oct 14 13:23:46.057622 master-1 kubenswrapper[4740]: I1014 13:23:46.057567 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-dir\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 13:23:46.058279 master-1 kubenswrapper[4740]: I1014 13:23:46.057673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-encryption-config\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 13:23:46.058279 master-1 kubenswrapper[4740]: I1014 13:23:46.057721 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-serving-cert\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 13:23:46.058279 master-1 kubenswrapper[4740]: I1014 13:23:46.057788 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-trusted-ca-bundle\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 
13:23:46.058279 master-1 kubenswrapper[4740]: I1014 13:23:46.057864 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-serving-ca\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 13:23:46.058279 master-1 kubenswrapper[4740]: I1014 13:23:46.057910 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px7x9\" (UniqueName: \"kubernetes.io/projected/6492175e-e529-4b83-a4f0-45c7a30f7a86-kube-api-access-px7x9\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 13:23:46.058279 master-1 kubenswrapper[4740]: I1014 13:23:46.057939 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-client\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 13:23:46.058279 master-1 kubenswrapper[4740]: I1014 13:23:46.057966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-policies\") pod \"6492175e-e529-4b83-a4f0-45c7a30f7a86\" (UID: \"6492175e-e529-4b83-a4f0-45c7a30f7a86\") " Oct 14 13:23:46.058671 master-1 kubenswrapper[4740]: I1014 13:23:46.058418 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:23:46.060051 master-1 kubenswrapper[4740]: I1014 13:23:46.059989 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:46.060051 master-1 kubenswrapper[4740]: I1014 13:23:46.060017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:46.060367 master-1 kubenswrapper[4740]: I1014 13:23:46.059696 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:46.061967 master-1 kubenswrapper[4740]: I1014 13:23:46.061930 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:46.062162 master-1 kubenswrapper[4740]: I1014 13:23:46.062100 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6492175e-e529-4b83-a4f0-45c7a30f7a86-kube-api-access-px7x9" (OuterVolumeSpecName: "kube-api-access-px7x9") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "kube-api-access-px7x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:23:46.084844 master-1 kubenswrapper[4740]: I1014 13:23:46.084805 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:46.087813 master-1 kubenswrapper[4740]: I1014 13:23:46.087768 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6492175e-e529-4b83-a4f0-45c7a30f7a86" (UID: "6492175e-e529-4b83-a4f0-45c7a30f7a86"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:46.159954 master-1 kubenswrapper[4740]: I1014 13:23:46.159921 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.160145 master-1 kubenswrapper[4740]: I1014 13:23:46.160128 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px7x9\" (UniqueName: \"kubernetes.io/projected/6492175e-e529-4b83-a4f0-45c7a30f7a86-kube-api-access-px7x9\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.160292 master-1 kubenswrapper[4740]: I1014 13:23:46.160275 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-etcd-client\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.160392 master-1 kubenswrapper[4740]: I1014 13:23:46.160378 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-policies\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.160480 master-1 kubenswrapper[4740]: I1014 13:23:46.160467 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6492175e-e529-4b83-a4f0-45c7a30f7a86-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.160562 master-1 kubenswrapper[4740]: I1014 13:23:46.160549 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-encryption-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.160642 master-1 kubenswrapper[4740]: I1014 13:23:46.160629 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6492175e-e529-4b83-a4f0-45c7a30f7a86-serving-cert\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.160722 master-1 kubenswrapper[4740]: I1014 13:23:46.160710 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6492175e-e529-4b83-a4f0-45c7a30f7a86-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:23:46.652772 master-1 kubenswrapper[4740]: I1014 13:23:46.652679 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" event={"ID":"6492175e-e529-4b83-a4f0-45c7a30f7a86","Type":"ContainerDied","Data":"116681c06662a5af31c4acc21e9356b554a14ae7ef5a59262361b356e94a29dc"} Oct 14 13:23:46.653002 master-1 kubenswrapper[4740]: I1014 13:23:46.652770 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b6784d654-s9576" Oct 14 13:23:46.653002 master-1 kubenswrapper[4740]: I1014 13:23:46.652808 4740 scope.go:117] "RemoveContainer" containerID="a239b7f63812583aa918ecca92d78715042d5630c3b5d976852ccf0f81559882" Oct 14 13:23:46.718416 master-1 kubenswrapper[4740]: I1014 13:23:46.718328 4740 scope.go:117] "RemoveContainer" containerID="46f87f50cd13f9281fb5bdb324b3969bf2687cbf6d1e1e8e755a253c6f2d276c" Oct 14 13:23:46.741588 master-1 kubenswrapper[4740]: I1014 13:23:46.741509 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-s9576"] Oct 14 13:23:46.759073 master-1 kubenswrapper[4740]: I1014 13:23:46.758967 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-s9576"] Oct 14 13:23:46.959292 master-1 kubenswrapper[4740]: I1014 13:23:46.959146 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" path="/var/lib/kubelet/pods/6492175e-e529-4b83-a4f0-45c7a30f7a86/volumes" Oct 14 
13:23:47.381891 master-1 kubenswrapper[4740]: I1014 13:23:47.381810 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65687bc9c8-h4cd4"] Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382077 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="init-config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382093 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="init-config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382111 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="prometheus" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382119 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="prometheus" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382131 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382140 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382154 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="init-config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382162 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="init-config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382173 4740 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="prom-label-proxy" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382182 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="prom-label-proxy" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382195 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="thanos-sidecar" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382203 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="thanos-sidecar" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382213 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382221 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382248 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy-metric" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382257 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy-metric" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382274 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382282 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" 
Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382297 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f36641-2c8a-4c3f-83c6-3ff25d86d52e" containerName="oauth-openshift" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382306 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f36641-2c8a-4c3f-83c6-3ff25d86d52e" containerName="oauth-openshift" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382317 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382325 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="config-reloader" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382336 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-thanos" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382344 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-thanos" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382357 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382365 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382374 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy-web" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382382 4740 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy-web" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382393 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-web" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382401 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-web" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382413 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="fix-audit-permissions" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382421 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="fix-audit-permissions" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: E1014 13:23:47.382431 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="alertmanager" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382439 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="alertmanager" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382568 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="alertmanager" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382582 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="prom-label-proxy" Oct 14 13:23:47.382674 master-1 kubenswrapper[4740]: I1014 13:23:47.382600 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy-metric" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382610 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6492175e-e529-4b83-a4f0-45c7a30f7a86" containerName="oauth-apiserver" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382620 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382631 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-web" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382644 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="kube-rbac-proxy" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382654 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="thanos-sidecar" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382668 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f36641-2c8a-4c3f-83c6-3ff25d86d52e" containerName="oauth-openshift" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382677 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" containerName="config-reloader" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382688 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="prometheus" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382702 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e010854-ec42-42d1-8865-0fe4c78214ef" 
containerName="kube-rbac-proxy-web" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382713 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="kube-rbac-proxy-thanos" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.382725 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6539b776-6f11-4e9c-b195-cb354732ac2c" containerName="config-reloader" Oct 14 13:23:47.384708 master-1 kubenswrapper[4740]: I1014 13:23:47.383260 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.386341 master-1 kubenswrapper[4740]: I1014 13:23:47.386276 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 14 13:23:47.386423 master-1 kubenswrapper[4740]: I1014 13:23:47.386311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 14 13:23:47.388109 master-1 kubenswrapper[4740]: I1014 13:23:47.388067 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 14 13:23:47.388221 master-1 kubenswrapper[4740]: I1014 13:23:47.388160 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 14 13:23:47.388314 master-1 kubenswrapper[4740]: I1014 13:23:47.388298 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 14 13:23:47.388623 master-1 kubenswrapper[4740]: I1014 13:23:47.388595 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 14 13:23:47.388722 master-1 kubenswrapper[4740]: I1014 13:23:47.388625 4740 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 14 13:23:47.388791 master-1 kubenswrapper[4740]: I1014 13:23:47.388766 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 14 13:23:47.388897 master-1 kubenswrapper[4740]: I1014 13:23:47.388854 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-5vqgl" Oct 14 13:23:47.389894 master-1 kubenswrapper[4740]: I1014 13:23:47.389848 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 14 13:23:47.389996 master-1 kubenswrapper[4740]: I1014 13:23:47.389862 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 14 13:23:47.390125 master-1 kubenswrapper[4740]: I1014 13:23:47.390086 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 14 13:23:47.402464 master-1 kubenswrapper[4740]: I1014 13:23:47.402391 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 14 13:23:47.406019 master-1 kubenswrapper[4740]: I1014 13:23:47.405925 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65687bc9c8-h4cd4"] Oct 14 13:23:47.411835 master-1 kubenswrapper[4740]: I1014 13:23:47.411766 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 14 13:23:47.487769 master-1 kubenswrapper[4740]: I1014 13:23:47.487681 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.488049 master-1 kubenswrapper[4740]: I1014 13:23:47.487776 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.488572 master-1 kubenswrapper[4740]: I1014 13:23:47.488507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-login\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.488638 master-1 kubenswrapper[4740]: I1014 13:23:47.488598 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.488816 master-1 kubenswrapper[4740]: I1014 13:23:47.488724 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.488816 master-1 kubenswrapper[4740]: I1014 13:23:47.488793 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-session\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.488925 master-1 kubenswrapper[4740]: I1014 13:23:47.488823 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-audit-dir\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.489047 master-1 kubenswrapper[4740]: I1014 13:23:47.489005 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-router-certs\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.489110 master-1 kubenswrapper[4740]: I1014 13:23:47.489053 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-audit-policies\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " 
pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.489169 master-1 kubenswrapper[4740]: I1014 13:23:47.489154 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-error\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.489212 master-1 kubenswrapper[4740]: I1014 13:23:47.489197 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7929k\" (UniqueName: \"kubernetes.io/projected/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-kube-api-access-7929k\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.489283 master-1 kubenswrapper[4740]: I1014 13:23:47.489259 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-service-ca\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.489327 master-1 kubenswrapper[4740]: I1014 13:23:47.489282 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.590913 master-1 
kubenswrapper[4740]: I1014 13:23:47.590833 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.590913 master-1 kubenswrapper[4740]: I1014 13:23:47.590912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-login\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591207 master-1 kubenswrapper[4740]: I1014 13:23:47.590956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591207 master-1 kubenswrapper[4740]: I1014 13:23:47.590989 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591207 master-1 kubenswrapper[4740]: I1014 13:23:47.591018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-session\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591207 master-1 kubenswrapper[4740]: I1014 13:23:47.591057 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-audit-dir\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591207 master-1 kubenswrapper[4740]: I1014 13:23:47.591136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-router-certs\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591207 master-1 kubenswrapper[4740]: I1014 13:23:47.591179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-audit-policies\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591509 master-1 kubenswrapper[4740]: I1014 13:23:47.591215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-error\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " 
pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591509 master-1 kubenswrapper[4740]: I1014 13:23:47.591282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7929k\" (UniqueName: \"kubernetes.io/projected/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-kube-api-access-7929k\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591509 master-1 kubenswrapper[4740]: I1014 13:23:47.591317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591509 master-1 kubenswrapper[4740]: I1014 13:23:47.591331 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-audit-dir\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591509 master-1 kubenswrapper[4740]: I1014 13:23:47.591347 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-service-ca\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.591509 master-1 kubenswrapper[4740]: I1014 13:23:47.591471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.592618 master-1 kubenswrapper[4740]: I1014 13:23:47.592447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-audit-policies\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.592780 master-1 kubenswrapper[4740]: I1014 13:23:47.592741 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-service-ca\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.592867 master-1 kubenswrapper[4740]: I1014 13:23:47.592794 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.593330 master-1 kubenswrapper[4740]: I1014 13:23:47.593219 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") 
" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.594829 master-1 kubenswrapper[4740]: I1014 13:23:47.594763 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-error\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.595313 master-1 kubenswrapper[4740]: I1014 13:23:47.595269 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-login\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.595704 master-1 kubenswrapper[4740]: I1014 13:23:47.595648 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.596611 master-1 kubenswrapper[4740]: I1014 13:23:47.596568 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-session\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.597310 master-1 kubenswrapper[4740]: I1014 13:23:47.597254 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.597681 master-1 kubenswrapper[4740]: I1014 13:23:47.597633 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-system-router-certs\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.597799 master-1 kubenswrapper[4740]: I1014 13:23:47.597772 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.621015 master-1 kubenswrapper[4740]: I1014 13:23:47.620979 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7929k\" (UniqueName: \"kubernetes.io/projected/442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c-kube-api-access-7929k\") pod \"oauth-openshift-65687bc9c8-h4cd4\" (UID: \"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c\") " pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.711144 master-1 kubenswrapper[4740]: I1014 13:23:47.710684 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: I1014 13:23:47.812405 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:23:47.812499 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:23:47.813753 master-1 kubenswrapper[4740]: I1014 13:23:47.812529 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:23:48.213497 master-1 kubenswrapper[4740]: I1014 13:23:48.213418 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65687bc9c8-h4cd4"] Oct 14 13:23:48.219200 master-1 kubenswrapper[4740]: W1014 13:23:48.219115 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod442bd7e6_9cc3_4dc0_8d51_6f04492f2b5c.slice/crio-197f2abb27098dac5c3b29f5409ef7ff66d817422bc8a498f45945880c7acc05 WatchSource:0}: Error finding container 197f2abb27098dac5c3b29f5409ef7ff66d817422bc8a498f45945880c7acc05: Status 404 returned error can't find the container with id 197f2abb27098dac5c3b29f5409ef7ff66d817422bc8a498f45945880c7acc05 Oct 14 13:23:48.680284 master-1 kubenswrapper[4740]: I1014 13:23:48.676663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" event={"ID":"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c","Type":"ContainerStarted","Data":"370a66c474a366f5ec076692e1557ff09a5c26a9f3a742f7ea92f14e9560698d"} Oct 14 13:23:48.680284 master-1 kubenswrapper[4740]: I1014 13:23:48.676731 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" event={"ID":"442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c","Type":"ContainerStarted","Data":"197f2abb27098dac5c3b29f5409ef7ff66d817422bc8a498f45945880c7acc05"} Oct 
14 13:23:48.680284 master-1 kubenswrapper[4740]: I1014 13:23:48.678354 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:48.725098 master-1 kubenswrapper[4740]: I1014 13:23:48.724993 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" podStartSLOduration=30.72497247 podStartE2EDuration="30.72497247s" podCreationTimestamp="2025-10-14 13:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:23:48.722005082 +0000 UTC m=+1054.532294431" watchObservedRunningTime="2025-10-14 13:23:48.72497247 +0000 UTC m=+1054.535261809" Oct 14 13:23:48.971573 master-1 kubenswrapper[4740]: I1014 13:23:48.971476 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65687bc9c8-h4cd4" Oct 14 13:23:52.808518 master-1 kubenswrapper[4740]: I1014 13:23:52.808382 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:23:52.808518 master-1 kubenswrapper[4740]: I1014 13:23:52.808495 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:23:54.930180 master-1 kubenswrapper[4740]: I1014 13:23:54.930084 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/installer-6-master-1"] Oct 14 13:23:54.931940 master-1 kubenswrapper[4740]: I1014 13:23:54.931884 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:54.937948 master-1 kubenswrapper[4740]: I1014 13:23:54.937885 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-sdwrm" Oct 14 13:23:54.957921 master-1 kubenswrapper[4740]: I1014 13:23:54.957848 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-1"] Oct 14 13:23:55.125712 master-1 kubenswrapper[4740]: I1014 13:23:55.125577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kube-api-access\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.126078 master-1 kubenswrapper[4740]: I1014 13:23:55.125745 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-var-lock\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.126078 master-1 kubenswrapper[4740]: I1014 13:23:55.125820 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.227104 master-1 kubenswrapper[4740]: 
I1014 13:23:55.226894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.227104 master-1 kubenswrapper[4740]: I1014 13:23:55.227057 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kube-api-access\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.227518 master-1 kubenswrapper[4740]: I1014 13:23:55.227105 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.227679 master-1 kubenswrapper[4740]: I1014 13:23:55.227615 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-var-lock\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.227807 master-1 kubenswrapper[4740]: I1014 13:23:55.227699 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-var-lock\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.253386 master-1 kubenswrapper[4740]: 
I1014 13:23:55.253277 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kube-api-access\") pod \"installer-6-master-1\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.256314 master-1 kubenswrapper[4740]: I1014 13:23:55.256237 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:23:55.396325 master-1 kubenswrapper[4740]: I1014 13:23:55.396089 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz"] Oct 14 13:23:55.414685 master-1 kubenswrapper[4740]: I1014 13:23:55.414614 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.417910 master-1 kubenswrapper[4740]: I1014 13:23:55.417876 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 13:23:55.418040 master-1 kubenswrapper[4740]: I1014 13:23:55.417924 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 14 13:23:55.418263 master-1 kubenswrapper[4740]: I1014 13:23:55.418214 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 13:23:55.418361 master-1 kubenswrapper[4740]: I1014 13:23:55.418295 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 14 13:23:55.418468 master-1 kubenswrapper[4740]: I1014 13:23:55.418248 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 14 13:23:55.418582 master-1 kubenswrapper[4740]: I1014 13:23:55.418503 4740 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 14 13:23:55.418640 master-1 kubenswrapper[4740]: I1014 13:23:55.418614 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-8gpjk" Oct 14 13:23:55.418882 master-1 kubenswrapper[4740]: I1014 13:23:55.418823 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 13:23:55.419052 master-1 kubenswrapper[4740]: I1014 13:23:55.419011 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 14 13:23:55.420704 master-1 kubenswrapper[4740]: I1014 13:23:55.420654 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz"] Oct 14 13:23:55.533671 master-1 kubenswrapper[4740]: I1014 13:23:55.533614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-encryption-config\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.533945 master-1 kubenswrapper[4740]: I1014 13:23:55.533711 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-serving-ca\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.533945 master-1 kubenswrapper[4740]: I1014 13:23:55.533740 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-serving-cert\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.533945 master-1 kubenswrapper[4740]: I1014 13:23:55.533771 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-dir\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.533945 master-1 kubenswrapper[4740]: I1014 13:23:55.533794 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-policies\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.533945 master-1 kubenswrapper[4740]: I1014 13:23:55.533900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-trusted-ca-bundle\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.534177 master-1 kubenswrapper[4740]: I1014 13:23:55.533999 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-client\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.534177 master-1 kubenswrapper[4740]: I1014 
13:23:55.534164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf4qh\" (UniqueName: \"kubernetes.io/projected/d5a933b7-cba6-4bb3-9529-918d06be4da7-kube-api-access-rf4qh\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf4qh\" (UniqueName: \"kubernetes.io/projected/d5a933b7-cba6-4bb3-9529-918d06be4da7-kube-api-access-rf4qh\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-encryption-config\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636121 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-serving-ca\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636143 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-serving-cert\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " 
pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636199 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-dir\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636267 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-policies\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636320 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-trusted-ca-bundle\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-client\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.636911 master-1 kubenswrapper[4740]: I1014 13:23:55.636521 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-dir\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: 
\"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.638198 master-1 kubenswrapper[4740]: I1014 13:23:55.638104 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-trusted-ca-bundle\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.638330 master-1 kubenswrapper[4740]: I1014 13:23:55.638189 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-serving-ca\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.638573 master-1 kubenswrapper[4740]: I1014 13:23:55.638506 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-policies\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.641065 master-1 kubenswrapper[4740]: I1014 13:23:55.641007 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-serving-cert\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.642525 master-1 kubenswrapper[4740]: I1014 13:23:55.642476 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-client\") pod 
\"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.644282 master-1 kubenswrapper[4740]: I1014 13:23:55.644250 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-encryption-config\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.666040 master-1 kubenswrapper[4740]: I1014 13:23:55.665971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf4qh\" (UniqueName: \"kubernetes.io/projected/d5a933b7-cba6-4bb3-9529-918d06be4da7-kube-api-access-rf4qh\") pod \"apiserver-84c8b8d745-j8fqz\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") " pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.736384 master-1 kubenswrapper[4740]: I1014 13:23:55.736318 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:23:55.758378 master-1 kubenswrapper[4740]: I1014 13:23:55.758300 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-1"] Oct 14 13:23:55.762606 master-1 kubenswrapper[4740]: W1014 13:23:55.762543 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4637e3ab_bce1_4ea4_b61f_2e7e201e8943.slice/crio-33b6d5ec85a574725180b6743a05c6bb45afb38ad96ac368e4225a68a6ec8478 WatchSource:0}: Error finding container 33b6d5ec85a574725180b6743a05c6bb45afb38ad96ac368e4225a68a6ec8478: Status 404 returned error can't find the container with id 33b6d5ec85a574725180b6743a05c6bb45afb38ad96ac368e4225a68a6ec8478 Oct 14 13:23:56.180090 master-1 kubenswrapper[4740]: I1014 13:23:56.179974 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz"] Oct 14 13:23:56.193256 master-1 kubenswrapper[4740]: W1014 13:23:56.193174 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5a933b7_cba6_4bb3_9529_918d06be4da7.slice/crio-d56e9fef9fbecfda134ec0e5c15a1d4b21911a3ca69b963035ea391519bf2368 WatchSource:0}: Error finding container d56e9fef9fbecfda134ec0e5c15a1d4b21911a3ca69b963035ea391519bf2368: Status 404 returned error can't find the container with id d56e9fef9fbecfda134ec0e5c15a1d4b21911a3ca69b963035ea391519bf2368 Oct 14 13:23:56.751866 master-1 kubenswrapper[4740]: I1014 13:23:56.751770 4740 generic.go:334] "Generic (PLEG): container finished" podID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerID="873fa7a6daf094c261cd142cbf648252d7f1dacb06fa63c2b1dfc1d8529c4c70" exitCode=0 Oct 14 13:23:56.752278 master-1 kubenswrapper[4740]: I1014 13:23:56.751909 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" 
event={"ID":"d5a933b7-cba6-4bb3-9529-918d06be4da7","Type":"ContainerDied","Data":"873fa7a6daf094c261cd142cbf648252d7f1dacb06fa63c2b1dfc1d8529c4c70"} Oct 14 13:23:56.752278 master-1 kubenswrapper[4740]: I1014 13:23:56.751979 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" event={"ID":"d5a933b7-cba6-4bb3-9529-918d06be4da7","Type":"ContainerStarted","Data":"d56e9fef9fbecfda134ec0e5c15a1d4b21911a3ca69b963035ea391519bf2368"} Oct 14 13:23:56.757014 master-1 kubenswrapper[4740]: I1014 13:23:56.754845 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"4637e3ab-bce1-4ea4-b61f-2e7e201e8943","Type":"ContainerStarted","Data":"0b349557939e22993cc4ffa4b3fd75e964d3da863bc3adf912202ced23db7fad"} Oct 14 13:23:56.757014 master-1 kubenswrapper[4740]: I1014 13:23:56.754893 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"4637e3ab-bce1-4ea4-b61f-2e7e201e8943","Type":"ContainerStarted","Data":"33b6d5ec85a574725180b6743a05c6bb45afb38ad96ac368e4225a68a6ec8478"} Oct 14 13:23:56.810483 master-1 kubenswrapper[4740]: I1014 13:23:56.810360 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-1" podStartSLOduration=2.810335199 podStartE2EDuration="2.810335199s" podCreationTimestamp="2025-10-14 13:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:23:56.807835632 +0000 UTC m=+1062.618124961" watchObservedRunningTime="2025-10-14 13:23:56.810335199 +0000 UTC m=+1062.620624528" Oct 14 13:23:57.764640 master-1 kubenswrapper[4740]: I1014 13:23:57.764553 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" 
event={"ID":"d5a933b7-cba6-4bb3-9529-918d06be4da7","Type":"ContainerStarted","Data":"f2e2740652494e2a8601bb964a94737bdc249abe23a6463336f3a8b42bda2bba"} Oct 14 13:23:57.807389 master-1 kubenswrapper[4740]: I1014 13:23:57.807313 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:23:57.807655 master-1 kubenswrapper[4740]: I1014 13:23:57.807412 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:23:57.810398 master-1 kubenswrapper[4740]: I1014 13:23:57.810296 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podStartSLOduration=64.810263785 podStartE2EDuration="1m4.810263785s" podCreationTimestamp="2025-10-14 13:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:23:57.807675346 +0000 UTC m=+1063.617964705" watchObservedRunningTime="2025-10-14 13:23:57.810263785 +0000 UTC m=+1063.620553154" Oct 14 13:24:00.736654 master-1 kubenswrapper[4740]: I1014 13:24:00.736578 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:24:00.736654 master-1 kubenswrapper[4740]: I1014 13:24:00.736652 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 
13:24:00.749477 master-1 kubenswrapper[4740]: I1014 13:24:00.749398 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:24:00.804101 master-1 kubenswrapper[4740]: I1014 13:24:00.804034 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:24:02.807383 master-1 kubenswrapper[4740]: I1014 13:24:02.807314 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:02.808410 master-1 kubenswrapper[4740]: I1014 13:24:02.808342 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: I1014 13:24:05.612908 4740 patch_prober.go:28] interesting pod/metrics-server-8475fbcb68-p4n8s container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: [+]metric-storage-ready ok Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: [+]metric-informer-sync ok Oct 14 13:24:05.612982 master-1 
kubenswrapper[4740]: [+]metadata-informer-sync ok Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:24:05.612982 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:24:05.613991 master-1 kubenswrapper[4740]: I1014 13:24:05.613000 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:24:07.807548 master-1 kubenswrapper[4740]: I1014 13:24:07.807436 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:07.808534 master-1 kubenswrapper[4740]: I1014 13:24:07.807605 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:12.807959 master-1 kubenswrapper[4740]: I1014 13:24:12.807871 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:12.809007 master-1 kubenswrapper[4740]: I1014 13:24:12.807959 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" 
containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:17.808016 master-1 kubenswrapper[4740]: I1014 13:24:17.807911 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:17.808016 master-1 kubenswrapper[4740]: I1014 13:24:17.808029 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:21.722344 master-1 kubenswrapper[4740]: I1014 13:24:21.722224 4740 scope.go:117] "RemoveContainer" containerID="6c49b12e94298058c3fe7e52d9debfe9322d63d2cbb98a0a9d0c95aba6f944b3" Oct 14 13:24:22.807816 master-1 kubenswrapper[4740]: I1014 13:24:22.807697 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:22.808529 master-1 kubenswrapper[4740]: I1014 13:24:22.807812 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 
10.128.0.73:8443: connect: connection refused" Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: I1014 13:24:25.616501 4740 patch_prober.go:28] interesting pod/metrics-server-8475fbcb68-p4n8s container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: [+]metric-storage-ready ok Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: [+]metric-informer-sync ok Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: [+]metadata-informer-sync ok Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:24:25.616601 master-1 kubenswrapper[4740]: I1014 13:24:25.616591 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:24:25.618372 master-1 kubenswrapper[4740]: I1014 13:24:25.616734 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:24:27.807542 master-1 kubenswrapper[4740]: I1014 13:24:27.807410 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:27.807542 master-1 
kubenswrapper[4740]: I1014 13:24:27.807502 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:28.957278 master-1 kubenswrapper[4740]: I1014 13:24:28.957091 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:24:28.957278 master-1 kubenswrapper[4740]: I1014 13:24:28.957218 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: E1014 13:24:28.957551 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-cert-syncer" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.957571 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-cert-syncer" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: E1014 13:24:28.957597 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1050094e1399d2efd697dc283130c5f7" containerName="cluster-policy-controller" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.957610 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1050094e1399d2efd697dc283130c5f7" containerName="cluster-policy-controller" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: E1014 13:24:28.957624 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-recovery-controller" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 
13:24:28.957638 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-recovery-controller" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: E1014 13:24:28.957657 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.957670 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.957818 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager" containerID="cri-o://516862ae041aab7390f584c0cbf3cdf2154c45cbdb2591237446bb7d27696ed4" gracePeriod=30 Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.957966 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://84816b63a679d0da082379c16b62aec3006ff768247ca2c54217f373f103c8e1" gracePeriod=30 Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.958045 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-recovery-controller" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.958047 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="1050094e1399d2efd697dc283130c5f7" containerName="cluster-policy-controller" 
containerID="cri-o://54f46dc9ca357d24aa0d18e8d5db0aee69d6d73cc41e66f9af2ffdab2e4b7cc3" gracePeriod=30 Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.958076 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1050094e1399d2efd697dc283130c5f7" containerName="cluster-policy-controller" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.958102 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-cert-syncer" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.958122 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager" Oct 14 13:24:28.958482 master-1 kubenswrapper[4740]: I1014 13:24:28.958001 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="1050094e1399d2efd697dc283130c5f7" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://410d42ad1c03831b0b0e58b34e9c7c20fbce91f19d06aca1df997680840d4c82" gracePeriod=30 Oct 14 13:24:29.077099 master-1 kubenswrapper[4740]: I1014 13:24:29.077046 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b744ceb9fd177ab93c0e259b2c87faa0-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"b744ceb9fd177ab93c0e259b2c87faa0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:29.077099 master-1 kubenswrapper[4740]: I1014 13:24:29.077105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b744ceb9fd177ab93c0e259b2c87faa0-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"b744ceb9fd177ab93c0e259b2c87faa0\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:29.081364 master-1 kubenswrapper[4740]: E1014 13:24:29.081323 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-conmon-54f46dc9ca357d24aa0d18e8d5db0aee69d6d73cc41e66f9af2ffdab2e4b7cc3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-conmon-410d42ad1c03831b0b0e58b34e9c7c20fbce91f19d06aca1df997680840d4c82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-conmon-516862ae041aab7390f584c0cbf3cdf2154c45cbdb2591237446bb7d27696ed4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-410d42ad1c03831b0b0e58b34e9c7c20fbce91f19d06aca1df997680840d4c82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-84816b63a679d0da082379c16b62aec3006ff768247ca2c54217f373f103c8e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-516862ae041aab7390f584c0cbf3cdf2154c45cbdb2591237446bb7d27696ed4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1050094e1399d2efd697dc283130c5f7.slice/crio-54f46dc9ca357d24aa0d18e8d5db0aee69d6d73cc41e66f9af2ffdab2e4b7cc3.scope\": RecentStats: unable to find data in memory cache]" Oct 14 13:24:29.147732 master-1 kubenswrapper[4740]: I1014 13:24:29.147662 
4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_1050094e1399d2efd697dc283130c5f7/kube-controller-manager-cert-syncer/0.log" Oct 14 13:24:29.148653 master-1 kubenswrapper[4740]: I1014 13:24:29.148631 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:29.154602 master-1 kubenswrapper[4740]: I1014 13:24:29.154580 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="1050094e1399d2efd697dc283130c5f7" podUID="b744ceb9fd177ab93c0e259b2c87faa0" Oct 14 13:24:29.178608 master-1 kubenswrapper[4740]: I1014 13:24:29.178502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b744ceb9fd177ab93c0e259b2c87faa0-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"b744ceb9fd177ab93c0e259b2c87faa0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:29.178811 master-1 kubenswrapper[4740]: I1014 13:24:29.178661 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b744ceb9fd177ab93c0e259b2c87faa0-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"b744ceb9fd177ab93c0e259b2c87faa0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:29.178869 master-1 kubenswrapper[4740]: I1014 13:24:29.178841 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b744ceb9fd177ab93c0e259b2c87faa0-cert-dir\") pod \"kube-controller-manager-master-1\" (UID: \"b744ceb9fd177ab93c0e259b2c87faa0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" 
Oct 14 13:24:29.178921 master-1 kubenswrapper[4740]: I1014 13:24:29.178911 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b744ceb9fd177ab93c0e259b2c87faa0-resource-dir\") pod \"kube-controller-manager-master-1\" (UID: \"b744ceb9fd177ab93c0e259b2c87faa0\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:29.279666 master-1 kubenswrapper[4740]: I1014 13:24:29.279574 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-cert-dir\") pod \"1050094e1399d2efd697dc283130c5f7\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " Oct 14 13:24:29.279666 master-1 kubenswrapper[4740]: I1014 13:24:29.279653 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-resource-dir\") pod \"1050094e1399d2efd697dc283130c5f7\" (UID: \"1050094e1399d2efd697dc283130c5f7\") " Oct 14 13:24:29.280151 master-1 kubenswrapper[4740]: I1014 13:24:29.280124 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "1050094e1399d2efd697dc283130c5f7" (UID: "1050094e1399d2efd697dc283130c5f7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:24:29.280196 master-1 kubenswrapper[4740]: I1014 13:24:29.280175 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "1050094e1399d2efd697dc283130c5f7" (UID: "1050094e1399d2efd697dc283130c5f7"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:24:29.381931 master-1 kubenswrapper[4740]: I1014 13:24:29.381839 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:29.381931 master-1 kubenswrapper[4740]: I1014 13:24:29.381917 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1050094e1399d2efd697dc283130c5f7-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:30.027080 master-1 kubenswrapper[4740]: I1014 13:24:30.027024 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-1_1050094e1399d2efd697dc283130c5f7/kube-controller-manager-cert-syncer/0.log" Oct 14 13:24:30.029635 master-1 kubenswrapper[4740]: I1014 13:24:30.029568 4740 generic.go:334] "Generic (PLEG): container finished" podID="1050094e1399d2efd697dc283130c5f7" containerID="84816b63a679d0da082379c16b62aec3006ff768247ca2c54217f373f103c8e1" exitCode=0 Oct 14 13:24:30.029635 master-1 kubenswrapper[4740]: I1014 13:24:30.029631 4740 generic.go:334] "Generic (PLEG): container finished" podID="1050094e1399d2efd697dc283130c5f7" containerID="410d42ad1c03831b0b0e58b34e9c7c20fbce91f19d06aca1df997680840d4c82" exitCode=2 Oct 14 13:24:30.029897 master-1 kubenswrapper[4740]: I1014 13:24:30.029654 4740 generic.go:334] "Generic (PLEG): container finished" podID="1050094e1399d2efd697dc283130c5f7" containerID="54f46dc9ca357d24aa0d18e8d5db0aee69d6d73cc41e66f9af2ffdab2e4b7cc3" exitCode=0 Oct 14 13:24:30.029897 master-1 kubenswrapper[4740]: I1014 13:24:30.029699 4740 generic.go:334] "Generic (PLEG): container finished" podID="1050094e1399d2efd697dc283130c5f7" containerID="516862ae041aab7390f584c0cbf3cdf2154c45cbdb2591237446bb7d27696ed4" exitCode=0 Oct 14 13:24:30.029897 master-1 kubenswrapper[4740]: I1014 
13:24:30.029658 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:30.029897 master-1 kubenswrapper[4740]: I1014 13:24:30.029841 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb8f2f44bcae1a186a655c93364d11e095a33cefe3d0fb53df6f97c9d907d695" Oct 14 13:24:30.032851 master-1 kubenswrapper[4740]: I1014 13:24:30.032770 4740 generic.go:334] "Generic (PLEG): container finished" podID="4637e3ab-bce1-4ea4-b61f-2e7e201e8943" containerID="0b349557939e22993cc4ffa4b3fd75e964d3da863bc3adf912202ced23db7fad" exitCode=0 Oct 14 13:24:30.033013 master-1 kubenswrapper[4740]: I1014 13:24:30.032849 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"4637e3ab-bce1-4ea4-b61f-2e7e201e8943","Type":"ContainerDied","Data":"0b349557939e22993cc4ffa4b3fd75e964d3da863bc3adf912202ced23db7fad"} Oct 14 13:24:30.039701 master-1 kubenswrapper[4740]: I1014 13:24:30.039628 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="1050094e1399d2efd697dc283130c5f7" podUID="b744ceb9fd177ab93c0e259b2c87faa0" Oct 14 13:24:30.082693 master-1 kubenswrapper[4740]: I1014 13:24:30.082478 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" oldPodUID="1050094e1399d2efd697dc283130c5f7" podUID="b744ceb9fd177ab93c0e259b2c87faa0" Oct 14 13:24:30.954513 master-1 kubenswrapper[4740]: I1014 13:24:30.954468 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1050094e1399d2efd697dc283130c5f7" path="/var/lib/kubelet/pods/1050094e1399d2efd697dc283130c5f7/volumes" Oct 14 13:24:31.421456 master-1 kubenswrapper[4740]: I1014 13:24:31.421406 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:24:31.518779 master-1 kubenswrapper[4740]: I1014 13:24:31.518676 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-var-lock\") pod \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " Oct 14 13:24:31.519015 master-1 kubenswrapper[4740]: I1014 13:24:31.518889 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kube-api-access\") pod \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " Oct 14 13:24:31.519015 master-1 kubenswrapper[4740]: I1014 13:24:31.519003 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kubelet-dir\") pod \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\" (UID: \"4637e3ab-bce1-4ea4-b61f-2e7e201e8943\") " Oct 14 13:24:31.519220 master-1 kubenswrapper[4740]: I1014 13:24:31.519174 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-var-lock" (OuterVolumeSpecName: "var-lock") pod "4637e3ab-bce1-4ea4-b61f-2e7e201e8943" (UID: "4637e3ab-bce1-4ea4-b61f-2e7e201e8943"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:24:31.519457 master-1 kubenswrapper[4740]: I1014 13:24:31.519360 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4637e3ab-bce1-4ea4-b61f-2e7e201e8943" (UID: "4637e3ab-bce1-4ea4-b61f-2e7e201e8943"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:24:31.519899 master-1 kubenswrapper[4740]: I1014 13:24:31.519837 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:31.519982 master-1 kubenswrapper[4740]: I1014 13:24:31.519901 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:31.522740 master-1 kubenswrapper[4740]: I1014 13:24:31.522682 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4637e3ab-bce1-4ea4-b61f-2e7e201e8943" (UID: "4637e3ab-bce1-4ea4-b61f-2e7e201e8943"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:24:31.621150 master-1 kubenswrapper[4740]: I1014 13:24:31.620981 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4637e3ab-bce1-4ea4-b61f-2e7e201e8943-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:31.707389 master-1 kubenswrapper[4740]: I1014 13:24:31.707283 4740 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 14 13:24:31.707389 master-1 kubenswrapper[4740]: I1014 13:24:31.707372 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="87a988d8-ed78-4396-a4fa-d856ff93860f" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 14 13:24:32.048890 master-1 kubenswrapper[4740]: I1014 13:24:32.048789 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-1" event={"ID":"4637e3ab-bce1-4ea4-b61f-2e7e201e8943","Type":"ContainerDied","Data":"33b6d5ec85a574725180b6743a05c6bb45afb38ad96ac368e4225a68a6ec8478"} Oct 14 13:24:32.048890 master-1 kubenswrapper[4740]: I1014 13:24:32.048858 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33b6d5ec85a574725180b6743a05c6bb45afb38ad96ac368e4225a68a6ec8478" Oct 14 13:24:32.048890 master-1 kubenswrapper[4740]: I1014 13:24:32.048869 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-1" Oct 14 13:24:32.808284 master-1 kubenswrapper[4740]: I1014 13:24:32.808151 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:32.809145 master-1 kubenswrapper[4740]: I1014 13:24:32.808329 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:36.707145 master-1 kubenswrapper[4740]: I1014 13:24:36.707053 4740 patch_prober.go:28] interesting pod/kube-controller-manager-guard-master-1 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" start-of-body= Oct 14 13:24:36.707145 master-1 kubenswrapper[4740]: I1014 13:24:36.707128 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-1" podUID="87a988d8-ed78-4396-a4fa-d856ff93860f" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:10257/healthz\": dial tcp 192.168.34.11:10257: connect: connection refused" Oct 14 13:24:37.807868 master-1 kubenswrapper[4740]: I1014 13:24:37.807772 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get 
\"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:37.807868 master-1 kubenswrapper[4740]: I1014 13:24:37.807841 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:39.943940 master-1 kubenswrapper[4740]: I1014 13:24:39.943837 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:39.977554 master-1 kubenswrapper[4740]: I1014 13:24:39.977484 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="86e8315f-c636-4c85-a142-66db47581390" Oct 14 13:24:39.977554 master-1 kubenswrapper[4740]: I1014 13:24:39.977543 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podUID="86e8315f-c636-4c85-a142-66db47581390" Oct 14 13:24:40.000015 master-1 kubenswrapper[4740]: I1014 13:24:39.999960 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:40.000626 master-1 kubenswrapper[4740]: I1014 13:24:40.000600 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:24:40.006512 master-1 kubenswrapper[4740]: I1014 13:24:40.006447 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:24:40.021252 master-1 kubenswrapper[4740]: I1014 13:24:40.021196 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" Oct 14 13:24:40.032323 master-1 kubenswrapper[4740]: I1014 13:24:40.032287 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-1"] Oct 14 13:24:40.044971 master-1 kubenswrapper[4740]: W1014 13:24:40.044935 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb744ceb9fd177ab93c0e259b2c87faa0.slice/crio-f420d20b6ce84c2387b95aa928b6b07195bcd638db7fa2ae4fb09c52533d0a74 WatchSource:0}: Error finding container f420d20b6ce84c2387b95aa928b6b07195bcd638db7fa2ae4fb09c52533d0a74: Status 404 returned error can't find the container with id f420d20b6ce84c2387b95aa928b6b07195bcd638db7fa2ae4fb09c52533d0a74 Oct 14 13:24:40.125133 master-1 kubenswrapper[4740]: I1014 13:24:40.125053 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"b744ceb9fd177ab93c0e259b2c87faa0","Type":"ContainerStarted","Data":"f420d20b6ce84c2387b95aa928b6b07195bcd638db7fa2ae4fb09c52533d0a74"} Oct 14 13:24:40.613763 master-1 kubenswrapper[4740]: I1014 13:24:40.613602 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-8-master-1"] Oct 14 13:24:40.614350 master-1 kubenswrapper[4740]: E1014 13:24:40.614137 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4637e3ab-bce1-4ea4-b61f-2e7e201e8943" containerName="installer" Oct 14 13:24:40.614350 master-1 kubenswrapper[4740]: I1014 13:24:40.614170 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4637e3ab-bce1-4ea4-b61f-2e7e201e8943" containerName="installer" Oct 14 13:24:40.614606 master-1 kubenswrapper[4740]: I1014 13:24:40.614550 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4637e3ab-bce1-4ea4-b61f-2e7e201e8943" 
containerName="installer" Oct 14 13:24:40.616220 master-1 kubenswrapper[4740]: I1014 13:24:40.616157 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.624167 master-1 kubenswrapper[4740]: I1014 13:24:40.624047 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbs2c" Oct 14 13:24:40.625986 master-1 kubenswrapper[4740]: I1014 13:24:40.625939 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-8-master-1"] Oct 14 13:24:40.767001 master-1 kubenswrapper[4740]: I1014 13:24:40.766837 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-kubelet-dir\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.767001 master-1 kubenswrapper[4740]: I1014 13:24:40.766949 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26c68b53-ad48-4681-9146-e0221d3f080e-kube-api-access\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.767274 master-1 kubenswrapper[4740]: I1014 13:24:40.767056 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-var-lock\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.868920 master-1 kubenswrapper[4740]: I1014 13:24:40.868295 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/26c68b53-ad48-4681-9146-e0221d3f080e-kube-api-access\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.869193 master-1 kubenswrapper[4740]: I1014 13:24:40.869123 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-var-lock\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.869496 master-1 kubenswrapper[4740]: I1014 13:24:40.869338 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-kubelet-dir\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.869605 master-1 kubenswrapper[4740]: I1014 13:24:40.869263 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-var-lock\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.869645 master-1 kubenswrapper[4740]: I1014 13:24:40.869452 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-kubelet-dir\") pod \"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.905561 master-1 kubenswrapper[4740]: I1014 13:24:40.905483 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26c68b53-ad48-4681-9146-e0221d3f080e-kube-api-access\") pod 
\"installer-8-master-1\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:40.996331 master-1 kubenswrapper[4740]: I1014 13:24:40.996106 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-1" Oct 14 13:24:41.157295 master-1 kubenswrapper[4740]: I1014 13:24:41.157170 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"b744ceb9fd177ab93c0e259b2c87faa0","Type":"ContainerStarted","Data":"62d472cf4fa1dd5417721aa2f6894f82006e094618e045fe83f0ea8eba180654"} Oct 14 13:24:41.157295 master-1 kubenswrapper[4740]: I1014 13:24:41.157211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"b744ceb9fd177ab93c0e259b2c87faa0","Type":"ContainerStarted","Data":"9d7087a9970407e1f72b42adf96247aebfb82263b399a1ed4bd510c5b81446e0"} Oct 14 13:24:41.157295 master-1 kubenswrapper[4740]: I1014 13:24:41.157220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"b744ceb9fd177ab93c0e259b2c87faa0","Type":"ContainerStarted","Data":"9452c915aaef5a5466b7d58622fb26b7bae1112fb8e77d37c5e6dd80efe07dd9"} Oct 14 13:24:41.431811 master-1 kubenswrapper[4740]: I1014 13:24:41.431769 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-8-master-1"] Oct 14 13:24:42.169570 master-1 kubenswrapper[4740]: I1014 13:24:42.169462 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" event={"ID":"b744ceb9fd177ab93c0e259b2c87faa0","Type":"ContainerStarted","Data":"67c949b7bb9a0490f40df2daa03c33502b7413e908579017a230213e51c35b85"} Oct 14 13:24:42.172324 master-1 kubenswrapper[4740]: I1014 13:24:42.172215 4740 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/installer-8-master-1" event={"ID":"26c68b53-ad48-4681-9146-e0221d3f080e","Type":"ContainerStarted","Data":"b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24"} Oct 14 13:24:42.172324 master-1 kubenswrapper[4740]: I1014 13:24:42.172319 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-8-master-1" event={"ID":"26c68b53-ad48-4681-9146-e0221d3f080e","Type":"ContainerStarted","Data":"e55d5407ff1cbb4be6373351e7cd0b629505627ac4d38c22e32433d1ef2e91ca"} Oct 14 13:24:42.234510 master-1 kubenswrapper[4740]: I1014 13:24:42.232570 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-1" podStartSLOduration=2.232544956 podStartE2EDuration="2.232544956s" podCreationTimestamp="2025-10-14 13:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:24:42.212436282 +0000 UTC m=+1108.022725611" watchObservedRunningTime="2025-10-14 13:24:42.232544956 +0000 UTC m=+1108.042834295" Oct 14 13:24:42.236649 master-1 kubenswrapper[4740]: I1014 13:24:42.235706 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-8-master-1" podStartSLOduration=2.235696838 podStartE2EDuration="2.235696838s" podCreationTimestamp="2025-10-14 13:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:24:42.231429687 +0000 UTC m=+1108.041719006" watchObservedRunningTime="2025-10-14 13:24:42.235696838 +0000 UTC m=+1108.045986177" Oct 14 13:24:42.808256 master-1 kubenswrapper[4740]: I1014 13:24:42.808160 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get 
\"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:42.808517 master-1 kubenswrapper[4740]: I1014 13:24:42.808257 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:44.421337 master-1 kubenswrapper[4740]: I1014 13:24:44.421277 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz"] Oct 14 13:24:44.421874 master-1 kubenswrapper[4740]: I1014 13:24:44.421601 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" containerID="cri-o://f2e2740652494e2a8601bb964a94737bdc249abe23a6463336f3a8b42bda2bba" gracePeriod=120 Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: I1014 13:24:45.613282 4740 patch_prober.go:28] interesting pod/metrics-server-8475fbcb68-p4n8s container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: [+]metric-storage-ready ok Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: [+]metric-informer-sync ok Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: [+]metadata-informer-sync ok Oct 14 13:24:45.613360 
master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:24:45.613360 master-1 kubenswrapper[4740]: I1014 13:24:45.613366 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: I1014 13:24:45.741900 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:24:45.741990 master-1 kubenswrapper[4740]: readyz check failed Oct 14 
13:24:45.742741 master-1 kubenswrapper[4740]: I1014 13:24:45.742000 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:24:46.605076 master-1 kubenswrapper[4740]: I1014 13:24:46.605018 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-8-master-1"] Oct 14 13:24:46.605355 master-1 kubenswrapper[4740]: I1014 13:24:46.605320 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-8-master-1" podUID="26c68b53-ad48-4681-9146-e0221d3f080e" containerName="installer" containerID="cri-o://b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24" gracePeriod=30 Oct 14 13:24:47.808341 master-1 kubenswrapper[4740]: I1014 13:24:47.808220 4740 patch_prober.go:28] interesting pod/apiserver-595d5f74d8-hck8v container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Oct 14 13:24:47.809532 master-1 kubenswrapper[4740]: I1014 13:24:47.808356 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.128.0.73:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.73:8443: connect: connection refused" Oct 14 13:24:48.225268 master-1 kubenswrapper[4740]: I1014 13:24:48.225200 4740 generic.go:334] "Generic (PLEG): container finished" podID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerID="194c25a7f27d321abe7b43f432aa05c8f7acba7f239a24bf7b4072916b25b5f2" exitCode=0 Oct 14 13:24:48.225502 master-1 
kubenswrapper[4740]: I1014 13:24:48.225259 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" event={"ID":"a0a34636-f938-4d5d-952c-68b1433d01cc","Type":"ContainerDied","Data":"194c25a7f27d321abe7b43f432aa05c8f7acba7f239a24bf7b4072916b25b5f2"} Oct 14 13:24:48.558059 master-1 kubenswrapper[4740]: I1014 13:24:48.557977 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" Oct 14 13:24:48.596783 master-1 kubenswrapper[4740]: I1014 13:24:48.596702 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-config\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.596783 master-1 kubenswrapper[4740]: I1014 13:24:48.596775 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-node-pullsecrets\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597103 master-1 kubenswrapper[4740]: I1014 13:24:48.596887 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-serving-ca\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597103 master-1 kubenswrapper[4740]: I1014 13:24:48.596929 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgp4t\" (UniqueName: \"kubernetes.io/projected/a0a34636-f938-4d5d-952c-68b1433d01cc-kube-api-access-tgp4t\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 
13:24:48.597103 master-1 kubenswrapper[4740]: I1014 13:24:48.596945 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:24:48.597103 master-1 kubenswrapper[4740]: I1014 13:24:48.597007 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-audit-dir\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597400 master-1 kubenswrapper[4740]: I1014 13:24:48.597112 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-image-import-ca\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597400 master-1 kubenswrapper[4740]: I1014 13:24:48.597173 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-client\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597400 master-1 kubenswrapper[4740]: I1014 13:24:48.597267 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-audit\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597400 master-1 kubenswrapper[4740]: I1014 13:24:48.597314 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-trusted-ca-bundle\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597400 master-1 kubenswrapper[4740]: I1014 13:24:48.597352 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-encryption-config\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.597400 master-1 kubenswrapper[4740]: I1014 13:24:48.597385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-serving-cert\") pod \"a0a34636-f938-4d5d-952c-68b1433d01cc\" (UID: \"a0a34636-f938-4d5d-952c-68b1433d01cc\") " Oct 14 13:24:48.598164 master-1 kubenswrapper[4740]: I1014 13:24:48.597458 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:24:48.598164 master-1 kubenswrapper[4740]: I1014 13:24:48.597515 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-config" (OuterVolumeSpecName: "config") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:24:48.598164 master-1 kubenswrapper[4740]: I1014 13:24:48.597876 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:48.598164 master-1 kubenswrapper[4740]: I1014 13:24:48.597913 4740 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-node-pullsecrets\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:48.598164 master-1 kubenswrapper[4740]: I1014 13:24:48.597946 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-serving-ca\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:48.598164 master-1 kubenswrapper[4740]: I1014 13:24:48.597979 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:24:48.598164 master-1 kubenswrapper[4740]: I1014 13:24:48.598045 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:24:48.598642 master-1 kubenswrapper[4740]: I1014 13:24:48.598320 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-audit" (OuterVolumeSpecName: "audit") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:24:48.598713 master-1 kubenswrapper[4740]: I1014 13:24:48.598630 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:24:48.601122 master-1 kubenswrapper[4740]: I1014 13:24:48.601083 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:24:48.601770 master-1 kubenswrapper[4740]: I1014 13:24:48.601712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:24:48.602932 master-1 kubenswrapper[4740]: I1014 13:24:48.602872 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:24:48.604573 master-1 kubenswrapper[4740]: I1014 13:24:48.604489 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a34636-f938-4d5d-952c-68b1433d01cc-kube-api-access-tgp4t" (OuterVolumeSpecName: "kube-api-access-tgp4t") pod "a0a34636-f938-4d5d-952c-68b1433d01cc" (UID: "a0a34636-f938-4d5d-952c-68b1433d01cc"). InnerVolumeSpecName "kube-api-access-tgp4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:24:48.699619 master-1 kubenswrapper[4740]: I1014 13:24:48.699533 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0a34636-f938-4d5d-952c-68b1433d01cc-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:48.699619 master-1 kubenswrapper[4740]: I1014 13:24:48.699588 4740 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-image-import-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:48.699619 master-1 kubenswrapper[4740]: I1014 13:24:48.699603 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:48.699619 master-1 kubenswrapper[4740]: I1014 13:24:48.699616 4740 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-audit\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:48.699619 master-1 kubenswrapper[4740]: I1014 13:24:48.699632 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0a34636-f938-4d5d-952c-68b1433d01cc-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:48.700017 master-1 kubenswrapper[4740]: I1014 13:24:48.699645 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:48.700017 master-1 kubenswrapper[4740]: I1014 13:24:48.699659 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0a34636-f938-4d5d-952c-68b1433d01cc-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:48.700017 master-1 kubenswrapper[4740]: I1014 13:24:48.699673 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgp4t\" (UniqueName: \"kubernetes.io/projected/a0a34636-f938-4d5d-952c-68b1433d01cc-kube-api-access-tgp4t\") on node \"master-1\" DevicePath \"\""
Oct 14 13:24:49.233673 master-1 kubenswrapper[4740]: I1014 13:24:49.233603 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v" event={"ID":"a0a34636-f938-4d5d-952c-68b1433d01cc","Type":"ContainerDied","Data":"b18bff52d4d529f9e5b8390d13649b4b130d79b766b7f0cd81c86ad46f6aee87"}
Oct 14 13:24:49.234385 master-1 kubenswrapper[4740]: I1014 13:24:49.233696 4740 scope.go:117] "RemoveContainer" containerID="1d3ba628773d880348e99b016c5d83127177dbbd2f44204a133e0dcdcec7087c"
Oct 14 13:24:49.234558 master-1 kubenswrapper[4740]: I1014 13:24:49.234525 4740 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-595d5f74d8-hck8v"
Oct 14 13:24:49.248117 master-1 kubenswrapper[4740]: I1014 13:24:49.248073 4740 scope.go:117] "RemoveContainer" containerID="194c25a7f27d321abe7b43f432aa05c8f7acba7f239a24bf7b4072916b25b5f2"
Oct 14 13:24:49.262905 master-1 kubenswrapper[4740]: I1014 13:24:49.262879 4740 scope.go:117] "RemoveContainer" containerID="e7b6632cec156bb361e2d5f2986265a8f548f804f2296c2d5dc4f2d8ae5613d7"
Oct 14 13:24:49.384241 master-1 kubenswrapper[4740]: I1014 13:24:49.384149 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-595d5f74d8-hck8v"]
Oct 14 13:24:49.387530 master-1 kubenswrapper[4740]: I1014 13:24:49.387479 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-595d5f74d8-hck8v"]
Oct 14 13:24:49.489189 master-1 kubenswrapper[4740]: I1014 13:24:49.489103 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-65499f9774-hhfd6"]
Oct 14 13:24:49.489518 master-1 kubenswrapper[4740]: E1014 13:24:49.489463 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver-check-endpoints"
Oct 14 13:24:49.489518 master-1 kubenswrapper[4740]: I1014 13:24:49.489480 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver-check-endpoints"
Oct 14 13:24:49.489518 master-1 kubenswrapper[4740]: E1014 13:24:49.489498 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver"
Oct 14 13:24:49.489518 master-1 kubenswrapper[4740]: I1014 13:24:49.489507 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver"
Oct 14 13:24:49.489518 master-1 kubenswrapper[4740]: E1014 13:24:49.489521 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="fix-audit-permissions"
Oct 14 13:24:49.489742 master-1 kubenswrapper[4740]: I1014 13:24:49.489530 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="fix-audit-permissions"
Oct 14 13:24:49.489742 master-1 kubenswrapper[4740]: I1014 13:24:49.489650 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver"
Oct 14 13:24:49.489742 master-1 kubenswrapper[4740]: I1014 13:24:49.489665 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" containerName="openshift-apiserver-check-endpoints"
Oct 14 13:24:49.490730 master-1 kubenswrapper[4740]: I1014 13:24:49.490692 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.493133 master-1 kubenswrapper[4740]: W1014 13:24:49.493078 4740 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.493220 master-1 kubenswrapper[4740]: W1014 13:24:49.493140 4740 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.493296 master-1 kubenswrapper[4740]: E1014 13:24:49.493154 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.493296 master-1 kubenswrapper[4740]: E1014 13:24:49.493264 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.493392 master-1 kubenswrapper[4740]: W1014 13:24:49.493360 4740 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.493476 master-1 kubenswrapper[4740]: E1014 13:24:49.493403 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.493583 master-1 kubenswrapper[4740]: W1014 13:24:49.493543 4740 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this
object
Oct 14 13:24:49.493630 master-1 kubenswrapper[4740]: E1014 13:24:49.493583 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.493630 master-1 kubenswrapper[4740]: W1014 13:24:49.493561 4740 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.493630 master-1 kubenswrapper[4740]: W1014 13:24:49.493599 4740 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-95k8q": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-95k8q" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.493820 master-1 kubenswrapper[4740]: E1014 13:24:49.493620 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.493820 master-1 kubenswrapper[4740]: E1014 13:24:49.493674 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-95k8q\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-95k8q\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.493820 master-1 kubenswrapper[4740]: W1014 13:24:49.493736 4740 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.493820 master-1 kubenswrapper[4740]: E1014 13:24:49.493761 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.493820 master-1 kubenswrapper[4740]: W1014 13:24:49.493767 4740 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.493820 master-1 kubenswrapper[4740]: E1014 13:24:49.493791 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.494331 master-1 kubenswrapper[4740]: W1014 13:24:49.494241 4740 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.494331 master-1 kubenswrapper[4740]: E1014 13:24:49.494264 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.494331 master-1 kubenswrapper[4740]: W1014 13:24:49.494281 4740 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:master-1" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.494331 master-1 kubenswrapper[4740]: E1014 13:24:49.494313 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:master-1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.494547
master-1 kubenswrapper[4740]: W1014 13:24:49.494512 4740 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:master-1" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object
Oct 14 13:24:49.494547 master-1 kubenswrapper[4740]: E1014 13:24:49.494539 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:master-1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'master-1' and this object" logger="UnhandledError"
Oct 14 13:24:49.506687 master-1 kubenswrapper[4740]: I1014 13:24:49.506510 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-65499f9774-hhfd6"]
Oct 14 13:24:49.520515 master-1 kubenswrapper[4740]: I1014 13:24:49.520454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-etcd-serving-ca\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.520591 master-1 kubenswrapper[4740]: I1014 13:24:49.520536 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wxw\" (UniqueName: \"kubernetes.io/projected/28636fc7-1c12-4f0d-95fa-10d5810c8d96-kube-api-access-s9wxw\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.520661 master-1 kubenswrapper[4740]: I1014 13:24:49.520625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.520756 master-1 kubenswrapper[4740]: I1014 13:24:49.520723 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28636fc7-1c12-4f0d-95fa-10d5810c8d96-node-pullsecrets\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.520876 master-1 kubenswrapper[4740]: I1014 13:24:49.520832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-etcd-client\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.521096 master-1 kubenswrapper[4740]: I1014 13:24:49.521063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.521144 master-1 kubenswrapper[4740]: I1014 13:24:49.521106 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-image-import-ca\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.521192 master-1 kubenswrapper[4740]: I1014 13:24:49.521179 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit-dir\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.521286 master-1 kubenswrapper[4740]: I1014 13:24:49.521273 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-serving-cert\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.521325 master-1 kubenswrapper[4740]: I1014 13:24:49.521296 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-encryption-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.521568 master-1 kubenswrapper[4740]: I1014 13:24:49.521515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-trusted-ca-bundle\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.622764 master-1 kubenswrapper[4740]: I1014 13:24:49.622678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-trusted-ca-bundle\") pod \"apiserver-65499f9774-hhfd6\"
(UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.622764 master-1 kubenswrapper[4740]: I1014 13:24:49.622759 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-etcd-serving-ca\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.622818 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9wxw\" (UniqueName: \"kubernetes.io/projected/28636fc7-1c12-4f0d-95fa-10d5810c8d96-kube-api-access-s9wxw\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.622882 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.622920 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28636fc7-1c12-4f0d-95fa-10d5810c8d96-node-pullsecrets\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.622951 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-etcd-client\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.623006 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.623042 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-image-import-ca\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.623082 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit-dir\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623146 master-1 kubenswrapper[4740]: I1014 13:24:49.623138 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-encryption-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623961 master-1 kubenswrapper[4740]: I1014 13:24:49.623175 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-serving-cert\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623961 master-1 kubenswrapper[4740]: I1014 13:24:49.623201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit-dir\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:49.623961 master-1 kubenswrapper[4740]: I1014 13:24:49.623130 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/28636fc7-1c12-4f0d-95fa-10d5810c8d96-node-pullsecrets\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:50.022139 master-1 kubenswrapper[4740]: I1014 13:24:50.022051 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.022139 master-1 kubenswrapper[4740]: I1014 13:24:50.022146 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.022406 master-1 kubenswrapper[4740]: I1014 13:24:50.022178 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.022406 master-1 kubenswrapper[4740]: I1014 13:24:50.022204 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.028700 master-1 kubenswrapper[4740]: I1014 13:24:50.028652 4740 kubelet.go:2542] "SyncLoop
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.031004 master-1 kubenswrapper[4740]: I1014 13:24:50.030942 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.246063 master-1 kubenswrapper[4740]: I1014 13:24:50.245998 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.247617 master-1 kubenswrapper[4740]: I1014 13:24:50.247571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-1"
Oct 14 13:24:50.413002 master-1 kubenswrapper[4740]: I1014 13:24:50.412882 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 14 13:24:50.414316 master-1 kubenswrapper[4740]: I1014 13:24:50.414278 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-trusted-ca-bundle\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:50.433758 master-1 kubenswrapper[4740]: I1014 13:24:50.433735 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 14 13:24:50.444316 master-1 kubenswrapper[4740]: I1014 13:24:50.443896 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-etcd-serving-ca\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:50.444886 master-1 kubenswrapper[4740]: I1014 13:24:50.444793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 14 13:24:50.458203 master-1 kubenswrapper[4740]: I1014 13:24:50.458153 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-serving-cert\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:50.530769 master-1 kubenswrapper[4740]: I1014 13:24:50.530693 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-95k8q"
Oct 14 13:24:50.569449 master-1 kubenswrapper[4740]: I1014 13:24:50.569355 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 14 13:24:50.579004 master-1 kubenswrapper[4740]: I1014 13:24:50.578931 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-etcd-client\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6"
Oct 14 13:24:50.614496 master-1 kubenswrapper[4740]: I1014 13:24:50.614380 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-9-master-1"]
Oct 14 13:24:50.615681 master-1 kubenswrapper[4740]: I1014 13:24:50.615623 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-9-master-1"
Oct 14 13:24:50.623948 master-1 kubenswrapper[4740]: E1014 13:24:50.623865 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Oct 14 13:24:50.624212 master-1 kubenswrapper[4740]: E1014 13:24:50.623951 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Oct 14 13:24:50.624212 master-1 kubenswrapper[4740]: E1014 13:24:50.624026 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-image-import-ca podName:28636fc7-1c12-4f0d-95fa-10d5810c8d96 nodeName:}" failed. No retries permitted until 2025-10-14 13:24:51.123992418 +0000 UTC m=+1116.934281747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-image-import-ca") pod "apiserver-65499f9774-hhfd6" (UID: "28636fc7-1c12-4f0d-95fa-10d5810c8d96") : failed to sync configmap cache: timed out waiting for the condition
Oct 14 13:24:50.624212 master-1 kubenswrapper[4740]: E1014 13:24:50.624084 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-config podName:28636fc7-1c12-4f0d-95fa-10d5810c8d96 nodeName:}" failed. No retries permitted until 2025-10-14 13:24:51.12405267 +0000 UTC m=+1116.934342029 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-config") pod "apiserver-65499f9774-hhfd6" (UID: "28636fc7-1c12-4f0d-95fa-10d5810c8d96") : failed to sync configmap cache: timed out waiting for the condition
Oct 14 13:24:50.624212 master-1 kubenswrapper[4740]: E1014 13:24:50.624132 4740 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Oct 14 13:24:50.624212 master-1 kubenswrapper[4740]: E1014 13:24:50.624185 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit podName:28636fc7-1c12-4f0d-95fa-10d5810c8d96 nodeName:}" failed. No retries permitted until 2025-10-14 13:24:51.124167753 +0000 UTC m=+1116.934457302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit") pod "apiserver-65499f9774-hhfd6" (UID: "28636fc7-1c12-4f0d-95fa-10d5810c8d96") : failed to sync configmap cache: timed out waiting for the condition
Oct 14 13:24:50.624572 master-1 kubenswrapper[4740]: E1014 13:24:50.624273 4740 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Oct 14 13:24:50.624572 master-1 kubenswrapper[4740]: E1014 13:24:50.624316 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-encryption-config podName:28636fc7-1c12-4f0d-95fa-10d5810c8d96 nodeName:}" failed. No retries permitted until 2025-10-14 13:24:51.124300516 +0000 UTC m=+1116.934589885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-encryption-config") pod "apiserver-65499f9774-hhfd6" (UID: "28636fc7-1c12-4f0d-95fa-10d5810c8d96") : failed to sync secret cache: timed out waiting for the condition
Oct 14 13:24:50.627587 master-1 kubenswrapper[4740]: E1014 13:24:50.626344 4740 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Oct 14 13:24:50.633459 master-1 kubenswrapper[4740]: I1014 13:24:50.633400 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-9-master-1"]
Oct 14 13:24:50.634457 master-1 kubenswrapper[4740]: I1014 13:24:50.634399 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: I1014 13:24:50.742630 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:24:50.742737 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:24:50.743460 master-1 kubenswrapper[4740]: I1014 13:24:50.742780 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:24:50.743460 master-1 kubenswrapper[4740]: I1014 13:24:50.742984 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-var-lock\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1"
Oct 14 13:24:50.743460 master-1 kubenswrapper[4740]: I1014 13:24:50.743068 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11634530-ae8b-4907-b7f3-5cf28629c92a-kube-api-access\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1"
Oct 14 13:24:50.743460 master-1 kubenswrapper[4740]: I1014 13:24:50.743267 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-kubelet-dir\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1"
Oct 14
13:24:50.845604 master-1 kubenswrapper[4740]: I1014 13:24:50.845480 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-kubelet-dir\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:50.845944 master-1 kubenswrapper[4740]: I1014 13:24:50.845667 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-var-lock\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:50.845944 master-1 kubenswrapper[4740]: I1014 13:24:50.845701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-kubelet-dir\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:50.845944 master-1 kubenswrapper[4740]: I1014 13:24:50.845737 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11634530-ae8b-4907-b7f3-5cf28629c92a-kube-api-access\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:50.845944 master-1 kubenswrapper[4740]: I1014 13:24:50.845896 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-var-lock\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:50.872735 master-1 kubenswrapper[4740]: I1014 13:24:50.872651 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11634530-ae8b-4907-b7f3-5cf28629c92a-kube-api-access\") pod \"installer-9-master-1\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:50.886622 master-1 kubenswrapper[4740]: I1014 13:24:50.886564 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 14 13:24:50.936312 master-1 kubenswrapper[4740]: I1014 13:24:50.936190 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 14 13:24:50.949555 master-1 kubenswrapper[4740]: I1014 13:24:50.949464 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 14 13:24:50.955058 master-1 kubenswrapper[4740]: I1014 13:24:50.954951 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:50.973445 master-1 kubenswrapper[4740]: I1014 13:24:50.973328 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a34636-f938-4d5d-952c-68b1433d01cc" path="/var/lib/kubelet/pods/a0a34636-f938-4d5d-952c-68b1433d01cc/volumes" Oct 14 13:24:50.987863 master-1 kubenswrapper[4740]: I1014 13:24:50.987784 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 14 13:24:51.017291 master-1 kubenswrapper[4740]: I1014 13:24:51.016202 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 14 13:24:51.017291 master-1 kubenswrapper[4740]: E1014 13:24:51.016621 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s9wxw for pod openshift-apiserver/apiserver-65499f9774-hhfd6: [failed to fetch token: serviceaccounts "openshift-apiserver-sa" is forbidden: User 
"system:node:master-1" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object, failed to sync configmap cache: timed out waiting for the condition] Oct 14 13:24:51.017291 master-1 kubenswrapper[4740]: E1014 13:24:51.016784 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28636fc7-1c12-4f0d-95fa-10d5810c8d96-kube-api-access-s9wxw podName:28636fc7-1c12-4f0d-95fa-10d5810c8d96 nodeName:}" failed. No retries permitted until 2025-10-14 13:24:51.516748438 +0000 UTC m=+1117.327037797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s9wxw" (UniqueName: "kubernetes.io/projected/28636fc7-1c12-4f0d-95fa-10d5810c8d96-kube-api-access-s9wxw") pod "apiserver-65499f9774-hhfd6" (UID: "28636fc7-1c12-4f0d-95fa-10d5810c8d96") : [failed to fetch token: serviceaccounts "openshift-apiserver-sa" is forbidden: User "system:node:master-1" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object, failed to sync configmap cache: timed out waiting for the condition] Oct 14 13:24:51.150790 master-1 kubenswrapper[4740]: I1014 13:24:51.150715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.150790 master-1 kubenswrapper[4740]: I1014 13:24:51.150781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-image-import-ca\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " 
pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.151139 master-1 kubenswrapper[4740]: I1014 13:24:51.150846 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-encryption-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.151139 master-1 kubenswrapper[4740]: I1014 13:24:51.150976 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.151914 master-1 kubenswrapper[4740]: I1014 13:24:51.151857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-audit\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.152652 master-1 kubenswrapper[4740]: I1014 13:24:51.152613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.153265 master-1 kubenswrapper[4740]: I1014 13:24:51.153208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/28636fc7-1c12-4f0d-95fa-10d5810c8d96-image-import-ca\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " 
pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.156593 master-1 kubenswrapper[4740]: I1014 13:24:51.156303 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28636fc7-1c12-4f0d-95fa-10d5810c8d96-encryption-config\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.494332 master-1 kubenswrapper[4740]: I1014 13:24:51.490019 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 14 13:24:51.507763 master-1 kubenswrapper[4740]: W1014 13:24:51.507675 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod11634530_ae8b_4907_b7f3_5cf28629c92a.slice/crio-73fdf941cd3adc9db410eb7bc1606ace1b9f519ef2c20356d9dda0dee02cf2b1 WatchSource:0}: Error finding container 73fdf941cd3adc9db410eb7bc1606ace1b9f519ef2c20356d9dda0dee02cf2b1: Status 404 returned error can't find the container with id 73fdf941cd3adc9db410eb7bc1606ace1b9f519ef2c20356d9dda0dee02cf2b1 Oct 14 13:24:51.557180 master-1 kubenswrapper[4740]: I1014 13:24:51.557095 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9wxw\" (UniqueName: \"kubernetes.io/projected/28636fc7-1c12-4f0d-95fa-10d5810c8d96-kube-api-access-s9wxw\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:51.582304 master-1 kubenswrapper[4740]: I1014 13:24:51.582210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9wxw\" (UniqueName: \"kubernetes.io/projected/28636fc7-1c12-4f0d-95fa-10d5810c8d96-kube-api-access-s9wxw\") pod \"apiserver-65499f9774-hhfd6\" (UID: \"28636fc7-1c12-4f0d-95fa-10d5810c8d96\") " pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 
13:24:51.616177 master-1 kubenswrapper[4740]: I1014 13:24:51.616096 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:52.082948 master-1 kubenswrapper[4740]: I1014 13:24:52.082816 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-65499f9774-hhfd6"] Oct 14 13:24:52.094437 master-1 kubenswrapper[4740]: W1014 13:24:52.094360 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28636fc7_1c12_4f0d_95fa_10d5810c8d96.slice/crio-123f69dd155bc3fb4f975abb579241142ffe96a8f4dc7c13f8280e9a5c9c28ac WatchSource:0}: Error finding container 123f69dd155bc3fb4f975abb579241142ffe96a8f4dc7c13f8280e9a5c9c28ac: Status 404 returned error can't find the container with id 123f69dd155bc3fb4f975abb579241142ffe96a8f4dc7c13f8280e9a5c9c28ac Oct 14 13:24:52.263066 master-1 kubenswrapper[4740]: I1014 13:24:52.262997 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" event={"ID":"28636fc7-1c12-4f0d-95fa-10d5810c8d96","Type":"ContainerStarted","Data":"123f69dd155bc3fb4f975abb579241142ffe96a8f4dc7c13f8280e9a5c9c28ac"} Oct 14 13:24:52.266149 master-1 kubenswrapper[4740]: I1014 13:24:52.265723 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" event={"ID":"11634530-ae8b-4907-b7f3-5cf28629c92a","Type":"ContainerStarted","Data":"21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34"} Oct 14 13:24:52.266149 master-1 kubenswrapper[4740]: I1014 13:24:52.265780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" event={"ID":"11634530-ae8b-4907-b7f3-5cf28629c92a","Type":"ContainerStarted","Data":"73fdf941cd3adc9db410eb7bc1606ace1b9f519ef2c20356d9dda0dee02cf2b1"} Oct 14 13:24:52.284766 master-1 kubenswrapper[4740]: I1014 13:24:52.284670 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-9-master-1" podStartSLOduration=2.284653723 podStartE2EDuration="2.284653723s" podCreationTimestamp="2025-10-14 13:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:24:52.282527588 +0000 UTC m=+1118.092816917" watchObservedRunningTime="2025-10-14 13:24:52.284653723 +0000 UTC m=+1118.094943052" Oct 14 13:24:53.279354 master-1 kubenswrapper[4740]: I1014 13:24:53.279209 4740 generic.go:334] "Generic (PLEG): container finished" podID="28636fc7-1c12-4f0d-95fa-10d5810c8d96" containerID="8ac984be4371e720ca279d274dcd2241c955c1782f85aae9d06cb1c500def94f" exitCode=0 Oct 14 13:24:53.280030 master-1 kubenswrapper[4740]: I1014 13:24:53.279681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" event={"ID":"28636fc7-1c12-4f0d-95fa-10d5810c8d96","Type":"ContainerDied","Data":"8ac984be4371e720ca279d274dcd2241c955c1782f85aae9d06cb1c500def94f"} Oct 14 13:24:54.292972 master-1 kubenswrapper[4740]: I1014 13:24:54.292883 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" event={"ID":"28636fc7-1c12-4f0d-95fa-10d5810c8d96","Type":"ContainerStarted","Data":"d465bfd4578a529c1fb5e1fba10356a9ebf44ec8aed617bc63f95700b2006baa"} Oct 14 13:24:54.292972 master-1 kubenswrapper[4740]: I1014 13:24:54.292957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" event={"ID":"28636fc7-1c12-4f0d-95fa-10d5810c8d96","Type":"ContainerStarted","Data":"a56da484688a500e3b65080e2971895c3578d0a22c7e8d3d3c9a8fa22f5b2f83"} Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: I1014 13:24:55.742964 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:24:55.743042 master-1 kubenswrapper[4740]: I1014 13:24:55.743045 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:24:55.744620 master-1 kubenswrapper[4740]: I1014 13:24:55.743072 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" podStartSLOduration=10.743046807 podStartE2EDuration="10.743046807s" podCreationTimestamp="2025-10-14 13:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:24:54.365888423 +0000 UTC m=+1120.176177772" watchObservedRunningTime="2025-10-14 13:24:55.743046807 +0000 UTC m=+1121.553336206" Oct 14 13:24:55.744620 master-1 kubenswrapper[4740]: I1014 13:24:55.743145 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:24:55.746911 master-1 kubenswrapper[4740]: I1014 13:24:55.746788 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-77d8f866f9-skvf6"] Oct 14 13:24:55.809137 master-1 kubenswrapper[4740]: I1014 13:24:55.809036 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 14 13:24:55.809586 master-1 kubenswrapper[4740]: I1014 13:24:55.809355 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-9-master-1" podUID="11634530-ae8b-4907-b7f3-5cf28629c92a" containerName="installer" containerID="cri-o://21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34" gracePeriod=30 Oct 14 13:24:56.242448 master-1 kubenswrapper[4740]: I1014 13:24:56.242418 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-9-master-1_11634530-ae8b-4907-b7f3-5cf28629c92a/installer/0.log" Oct 14 13:24:56.242665 master-1 kubenswrapper[4740]: I1014 13:24:56.242653 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:56.313358 master-1 kubenswrapper[4740]: I1014 13:24:56.312375 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-9-master-1_11634530-ae8b-4907-b7f3-5cf28629c92a/installer/0.log" Oct 14 13:24:56.313358 master-1 kubenswrapper[4740]: I1014 13:24:56.312439 4740 generic.go:334] "Generic (PLEG): container finished" podID="11634530-ae8b-4907-b7f3-5cf28629c92a" containerID="21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34" exitCode=1 Oct 14 13:24:56.313358 master-1 kubenswrapper[4740]: I1014 13:24:56.312475 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" event={"ID":"11634530-ae8b-4907-b7f3-5cf28629c92a","Type":"ContainerDied","Data":"21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34"} Oct 14 13:24:56.313358 master-1 kubenswrapper[4740]: I1014 13:24:56.312513 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-9-master-1" event={"ID":"11634530-ae8b-4907-b7f3-5cf28629c92a","Type":"ContainerDied","Data":"73fdf941cd3adc9db410eb7bc1606ace1b9f519ef2c20356d9dda0dee02cf2b1"} Oct 14 13:24:56.313358 master-1 kubenswrapper[4740]: I1014 13:24:56.312552 4740 scope.go:117] "RemoveContainer" containerID="21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34" Oct 14 13:24:56.313358 master-1 kubenswrapper[4740]: I1014 13:24:56.312735 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-9-master-1" Oct 14 13:24:56.324574 master-1 kubenswrapper[4740]: I1014 13:24:56.324510 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11634530-ae8b-4907-b7f3-5cf28629c92a-kube-api-access\") pod \"11634530-ae8b-4907-b7f3-5cf28629c92a\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " Oct 14 13:24:56.324814 master-1 kubenswrapper[4740]: I1014 13:24:56.324614 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-kubelet-dir\") pod \"11634530-ae8b-4907-b7f3-5cf28629c92a\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " Oct 14 13:24:56.324814 master-1 kubenswrapper[4740]: I1014 13:24:56.324682 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-var-lock\") pod \"11634530-ae8b-4907-b7f3-5cf28629c92a\" (UID: \"11634530-ae8b-4907-b7f3-5cf28629c92a\") " Oct 14 13:24:56.324814 master-1 kubenswrapper[4740]: I1014 13:24:56.324780 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11634530-ae8b-4907-b7f3-5cf28629c92a" (UID: "11634530-ae8b-4907-b7f3-5cf28629c92a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:24:56.325000 master-1 kubenswrapper[4740]: I1014 13:24:56.324844 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-var-lock" (OuterVolumeSpecName: "var-lock") pod "11634530-ae8b-4907-b7f3-5cf28629c92a" (UID: "11634530-ae8b-4907-b7f3-5cf28629c92a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:24:56.325000 master-1 kubenswrapper[4740]: I1014 13:24:56.324969 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:56.325000 master-1 kubenswrapper[4740]: I1014 13:24:56.324982 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11634530-ae8b-4907-b7f3-5cf28629c92a-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:56.328269 master-1 kubenswrapper[4740]: I1014 13:24:56.328201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11634530-ae8b-4907-b7f3-5cf28629c92a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11634530-ae8b-4907-b7f3-5cf28629c92a" (UID: "11634530-ae8b-4907-b7f3-5cf28629c92a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:24:56.333297 master-1 kubenswrapper[4740]: I1014 13:24:56.333254 4740 scope.go:117] "RemoveContainer" containerID="21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34" Oct 14 13:24:56.333816 master-1 kubenswrapper[4740]: E1014 13:24:56.333761 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34\": container with ID starting with 21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34 not found: ID does not exist" containerID="21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34" Oct 14 13:24:56.333816 master-1 kubenswrapper[4740]: I1014 13:24:56.333798 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34"} err="failed to get container status \"21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34\": rpc error: code = NotFound desc = could not find container \"21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34\": container with ID starting with 21923472a8911889a682a67af64e22aa2354d603e4bbbd848f5349af719b7b34 not found: ID does not exist" Oct 14 13:24:56.426653 master-1 kubenswrapper[4740]: I1014 13:24:56.426540 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11634530-ae8b-4907-b7f3-5cf28629c92a-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:24:56.616726 master-1 kubenswrapper[4740]: I1014 13:24:56.616519 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:56.616726 master-1 kubenswrapper[4740]: I1014 13:24:56.616587 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:56.625771 master-1 kubenswrapper[4740]: I1014 13:24:56.625333 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:56.818565 master-1 kubenswrapper[4740]: I1014 13:24:56.818474 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 14 13:24:56.843406 master-1 kubenswrapper[4740]: I1014 13:24:56.843304 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-9-master-1"] Oct 14 13:24:56.958589 master-1 kubenswrapper[4740]: I1014 13:24:56.958312 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11634530-ae8b-4907-b7f3-5cf28629c92a" path="/var/lib/kubelet/pods/11634530-ae8b-4907-b7f3-5cf28629c92a/volumes" Oct 14 13:24:57.328515 master-1 kubenswrapper[4740]: I1014 13:24:57.328395 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-65499f9774-hhfd6" Oct 14 13:24:59.205563 master-1 kubenswrapper[4740]: I1014 13:24:59.205462 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-10-master-1"] Oct 14 13:24:59.206547 master-1 kubenswrapper[4740]: E1014 13:24:59.205829 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11634530-ae8b-4907-b7f3-5cf28629c92a" containerName="installer" Oct 14 13:24:59.206547 master-1 kubenswrapper[4740]: I1014 13:24:59.205850 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="11634530-ae8b-4907-b7f3-5cf28629c92a" containerName="installer" Oct 14 13:24:59.206547 master-1 kubenswrapper[4740]: I1014 13:24:59.206066 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="11634530-ae8b-4907-b7f3-5cf28629c92a" containerName="installer" Oct 14 13:24:59.206896 master-1 kubenswrapper[4740]: I1014 13:24:59.206847 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.224061 master-1 kubenswrapper[4740]: I1014 13:24:59.223821 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-1"] Oct 14 13:24:59.275001 master-1 kubenswrapper[4740]: I1014 13:24:59.274905 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kube-api-access\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.275280 master-1 kubenswrapper[4740]: I1014 13:24:59.275066 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kubelet-dir\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.275280 master-1 kubenswrapper[4740]: I1014 13:24:59.275100 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-var-lock\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.377348 master-1 kubenswrapper[4740]: I1014 13:24:59.377250 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kube-api-access\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.377599 master-1 kubenswrapper[4740]: I1014 13:24:59.377409 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kubelet-dir\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.377599 master-1 kubenswrapper[4740]: I1014 13:24:59.377447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-var-lock\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.377599 master-1 kubenswrapper[4740]: I1014 13:24:59.377573 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-var-lock\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.377740 master-1 kubenswrapper[4740]: I1014 13:24:59.377589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kubelet-dir\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.409128 master-1 kubenswrapper[4740]: I1014 13:24:59.409062 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kube-api-access\") pod \"installer-10-master-1\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.526909 master-1 kubenswrapper[4740]: I1014 13:24:59.526803 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-10-master-1" Oct 14 13:24:59.980807 master-1 kubenswrapper[4740]: I1014 13:24:59.980640 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-10-master-1"] Oct 14 13:25:00.350107 master-1 kubenswrapper[4740]: I1014 13:25:00.350027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf","Type":"ContainerStarted","Data":"a794ebe78f12315605d06b3ce426051a6431f1ae1a3d07468a6ec2a86ebc702a"} Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: I1014 13:25:00.752868 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld 
Oct 14 13:25:00.752964 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:25:00.755936 master-1 kubenswrapper[4740]: I1014 13:25:00.752999 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:25:01.360123 master-1 kubenswrapper[4740]: I1014 13:25:01.360028 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf","Type":"ContainerStarted","Data":"c4889899325fa0123ff00f8c9f15c55e1a001211422e38ff28c0cfa66549f17e"} Oct 14 13:25:01.393840 master-1 kubenswrapper[4740]: I1014 13:25:01.393670 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-10-master-1" podStartSLOduration=2.393643753 podStartE2EDuration="2.393643753s" podCreationTimestamp="2025-10-14 13:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:25:01.392020101 +0000 UTC m=+1127.202309460" watchObservedRunningTime="2025-10-14 13:25:01.393643753 +0000 UTC m=+1127.203933112" Oct 14 13:25:02.373630 master-1 kubenswrapper[4740]: I1014 13:25:02.373557 4740 generic.go:334] "Generic (PLEG): container finished" podID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerID="ca6fc295da9f3231ac56c683e895278718ac1b23a52cca0c02cbe23b7495fbcc" exitCode=0 Oct 14 13:25:02.374622 master-1 kubenswrapper[4740]: I1014 13:25:02.373676 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" event={"ID":"fef43de0-1319-41d0-9ca4-d4795c56c459","Type":"ContainerDied","Data":"ca6fc295da9f3231ac56c683e895278718ac1b23a52cca0c02cbe23b7495fbcc"} Oct 14 13:25:02.430205 master-1 kubenswrapper[4740]: I1014 13:25:02.430115 
4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:25:02.549410 master-1 kubenswrapper[4740]: I1014 13:25:02.549324 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-metrics-server-audit-profiles\") pod \"fef43de0-1319-41d0-9ca4-d4795c56c459\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " Oct 14 13:25:02.549625 master-1 kubenswrapper[4740]: I1014 13:25:02.549463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle\") pod \"fef43de0-1319-41d0-9ca4-d4795c56c459\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " Oct 14 13:25:02.549625 master-1 kubenswrapper[4740]: I1014 13:25:02.549526 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-client-certs\") pod \"fef43de0-1319-41d0-9ca4-d4795c56c459\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " Oct 14 13:25:02.549625 master-1 kubenswrapper[4740]: I1014 13:25:02.549577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-configmap-kubelet-serving-ca-bundle\") pod \"fef43de0-1319-41d0-9ca4-d4795c56c459\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " Oct 14 13:25:02.549746 master-1 kubenswrapper[4740]: I1014 13:25:02.549671 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qffb6\" (UniqueName: \"kubernetes.io/projected/fef43de0-1319-41d0-9ca4-d4795c56c459-kube-api-access-qffb6\") pod 
\"fef43de0-1319-41d0-9ca4-d4795c56c459\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " Oct 14 13:25:02.550631 master-1 kubenswrapper[4740]: I1014 13:25:02.550574 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "fef43de0-1319-41d0-9ca4-d4795c56c459" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:25:02.550786 master-1 kubenswrapper[4740]: I1014 13:25:02.550750 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fef43de0-1319-41d0-9ca4-d4795c56c459-audit-log\") pod \"fef43de0-1319-41d0-9ca4-d4795c56c459\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " Oct 14 13:25:02.550888 master-1 kubenswrapper[4740]: I1014 13:25:02.550791 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "fef43de0-1319-41d0-9ca4-d4795c56c459" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459"). InnerVolumeSpecName "metrics-server-audit-profiles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:25:02.550965 master-1 kubenswrapper[4740]: I1014 13:25:02.550841 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-server-tls\") pod \"fef43de0-1319-41d0-9ca4-d4795c56c459\" (UID: \"fef43de0-1319-41d0-9ca4-d4795c56c459\") " Oct 14 13:25:02.551896 master-1 kubenswrapper[4740]: I1014 13:25:02.551840 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef43de0-1319-41d0-9ca4-d4795c56c459-audit-log" (OuterVolumeSpecName: "audit-log") pod "fef43de0-1319-41d0-9ca4-d4795c56c459" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:25:02.552290 master-1 kubenswrapper[4740]: I1014 13:25:02.552216 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-metrics-server-audit-profiles\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:02.552373 master-1 kubenswrapper[4740]: I1014 13:25:02.552307 4740 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef43de0-1319-41d0-9ca4-d4795c56c459-configmap-kubelet-serving-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:02.552419 master-1 kubenswrapper[4740]: I1014 13:25:02.552363 4740 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/fef43de0-1319-41d0-9ca4-d4795c56c459-audit-log\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:02.552462 master-1 kubenswrapper[4740]: I1014 13:25:02.552421 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "fef43de0-1319-41d0-9ca4-d4795c56c459" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:25:02.554390 master-1 kubenswrapper[4740]: I1014 13:25:02.554343 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "fef43de0-1319-41d0-9ca4-d4795c56c459" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:25:02.555373 master-1 kubenswrapper[4740]: I1014 13:25:02.555327 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "fef43de0-1319-41d0-9ca4-d4795c56c459" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:25:02.560597 master-1 kubenswrapper[4740]: I1014 13:25:02.560512 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef43de0-1319-41d0-9ca4-d4795c56c459-kube-api-access-qffb6" (OuterVolumeSpecName: "kube-api-access-qffb6") pod "fef43de0-1319-41d0-9ca4-d4795c56c459" (UID: "fef43de0-1319-41d0-9ca4-d4795c56c459"). InnerVolumeSpecName "kube-api-access-qffb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:25:02.672653 master-1 kubenswrapper[4740]: I1014 13:25:02.653703 4740 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-server-tls\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:02.672653 master-1 kubenswrapper[4740]: I1014 13:25:02.653762 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-client-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:02.672653 master-1 kubenswrapper[4740]: I1014 13:25:02.653784 4740 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/fef43de0-1319-41d0-9ca4-d4795c56c459-secret-metrics-client-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:02.672653 master-1 kubenswrapper[4740]: I1014 13:25:02.653806 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qffb6\" (UniqueName: \"kubernetes.io/projected/fef43de0-1319-41d0-9ca4-d4795c56c459-kube-api-access-qffb6\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:03.395224 master-1 kubenswrapper[4740]: I1014 13:25:03.395132 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" event={"ID":"fef43de0-1319-41d0-9ca4-d4795c56c459","Type":"ContainerDied","Data":"d518677c76d3497ca4266cf5076f07055ff804f4cf7d9d111123d0d3bcda4401"} Oct 14 13:25:03.396117 master-1 kubenswrapper[4740]: I1014 13:25:03.395330 4740 scope.go:117] "RemoveContainer" containerID="ca6fc295da9f3231ac56c683e895278718ac1b23a52cca0c02cbe23b7495fbcc" Oct 14 13:25:03.396117 master-1 kubenswrapper[4740]: I1014 13:25:03.395727 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-8475fbcb68-p4n8s" Oct 14 13:25:03.436557 master-1 kubenswrapper[4740]: I1014 13:25:03.435456 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-8475fbcb68-p4n8s"] Oct 14 13:25:03.448032 master-1 kubenswrapper[4740]: I1014 13:25:03.447952 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-8475fbcb68-p4n8s"] Oct 14 13:25:04.957392 master-1 kubenswrapper[4740]: I1014 13:25:04.957341 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" path="/var/lib/kubelet/pods/fef43de0-1319-41d0-9ca4-d4795c56c459/volumes" Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: I1014 13:25:05.746460 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:25:05.746548 master-1 
kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:25:05.746548 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:25:05.747468 master-1 kubenswrapper[4740]: I1014 13:25:05.746569 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: I1014 13:25:10.743030 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: [-]shutdown failed: 
reason withheld Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:25:10.744926 master-1 kubenswrapper[4740]: I1014 13:25:10.743143 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:25:12.907058 master-1 kubenswrapper[4740]: I1014 13:25:12.906955 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-8-master-1_26c68b53-ad48-4681-9146-e0221d3f080e/installer/0.log" Oct 14 13:25:12.907058 master-1 kubenswrapper[4740]: I1014 13:25:12.907040 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-8-master-1" Oct 14 13:25:12.974387 master-1 kubenswrapper[4740]: I1014 13:25:12.974139 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-var-lock\") pod \"26c68b53-ad48-4681-9146-e0221d3f080e\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " Oct 14 13:25:12.974661 master-1 kubenswrapper[4740]: I1014 13:25:12.974293 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-var-lock" (OuterVolumeSpecName: "var-lock") pod "26c68b53-ad48-4681-9146-e0221d3f080e" (UID: "26c68b53-ad48-4681-9146-e0221d3f080e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:25:12.974661 master-1 kubenswrapper[4740]: I1014 13:25:12.974463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26c68b53-ad48-4681-9146-e0221d3f080e-kube-api-access\") pod \"26c68b53-ad48-4681-9146-e0221d3f080e\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " Oct 14 13:25:12.975043 master-1 kubenswrapper[4740]: I1014 13:25:12.974976 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-kubelet-dir\") pod \"26c68b53-ad48-4681-9146-e0221d3f080e\" (UID: \"26c68b53-ad48-4681-9146-e0221d3f080e\") " Oct 14 13:25:12.975157 master-1 kubenswrapper[4740]: I1014 13:25:12.975067 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "26c68b53-ad48-4681-9146-e0221d3f080e" (UID: "26c68b53-ad48-4681-9146-e0221d3f080e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:25:12.975656 master-1 kubenswrapper[4740]: I1014 13:25:12.975614 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:12.975757 master-1 kubenswrapper[4740]: I1014 13:25:12.975697 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26c68b53-ad48-4681-9146-e0221d3f080e-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:12.979590 master-1 kubenswrapper[4740]: I1014 13:25:12.979528 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c68b53-ad48-4681-9146-e0221d3f080e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "26c68b53-ad48-4681-9146-e0221d3f080e" (UID: "26c68b53-ad48-4681-9146-e0221d3f080e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:25:13.077395 master-1 kubenswrapper[4740]: I1014 13:25:13.077304 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26c68b53-ad48-4681-9146-e0221d3f080e-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:13.477778 master-1 kubenswrapper[4740]: I1014 13:25:13.477733 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-8-master-1_26c68b53-ad48-4681-9146-e0221d3f080e/installer/0.log" Oct 14 13:25:13.478162 master-1 kubenswrapper[4740]: I1014 13:25:13.478125 4740 generic.go:334] "Generic (PLEG): container finished" podID="26c68b53-ad48-4681-9146-e0221d3f080e" containerID="b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24" exitCode=1 Oct 14 13:25:13.478382 master-1 kubenswrapper[4740]: I1014 13:25:13.478302 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-8-master-1" Oct 14 13:25:13.478704 master-1 kubenswrapper[4740]: I1014 13:25:13.478307 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-8-master-1" event={"ID":"26c68b53-ad48-4681-9146-e0221d3f080e","Type":"ContainerDied","Data":"b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24"} Oct 14 13:25:13.478827 master-1 kubenswrapper[4740]: I1014 13:25:13.478760 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-8-master-1" event={"ID":"26c68b53-ad48-4681-9146-e0221d3f080e","Type":"ContainerDied","Data":"e55d5407ff1cbb4be6373351e7cd0b629505627ac4d38c22e32433d1ef2e91ca"} Oct 14 13:25:13.478900 master-1 kubenswrapper[4740]: I1014 13:25:13.478823 4740 scope.go:117] "RemoveContainer" containerID="b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24" Oct 14 13:25:13.504465 master-1 kubenswrapper[4740]: I1014 13:25:13.504396 4740 scope.go:117] "RemoveContainer" containerID="b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24" Oct 14 13:25:13.505052 master-1 kubenswrapper[4740]: E1014 13:25:13.504989 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24\": container with ID starting with b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24 not found: ID does not exist" containerID="b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24" Oct 14 13:25:13.505192 master-1 kubenswrapper[4740]: I1014 13:25:13.505058 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24"} err="failed to get container status \"b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24\": rpc error: code = NotFound desc = could not find container 
\"b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24\": container with ID starting with b3d11c58aace10eaaa6cd1baf6a6bf9d3efd225b4797b3c39cffb50302d6de24 not found: ID does not exist" Oct 14 13:25:13.537352 master-1 kubenswrapper[4740]: I1014 13:25:13.537184 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-8-master-1"] Oct 14 13:25:13.546794 master-1 kubenswrapper[4740]: I1014 13:25:13.546705 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-8-master-1"] Oct 14 13:25:14.960754 master-1 kubenswrapper[4740]: I1014 13:25:14.960696 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c68b53-ad48-4681-9146-e0221d3f080e" path="/var/lib/kubelet/pods/26c68b53-ad48-4681-9146-e0221d3f080e/volumes" Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: I1014 13:25:15.741920 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: 
[+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:25:15.742050 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:25:15.743137 master-1 kubenswrapper[4740]: I1014 13:25:15.742037 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: I1014 13:25:20.744949 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]etcd excluded: ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: 
[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:25:20.745056 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:25:20.746911 master-1 kubenswrapper[4740]: I1014 13:25:20.745054 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:25:20.810454 master-1 kubenswrapper[4740]: I1014 13:25:20.810362 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-77d8f866f9-skvf6" podUID="fe87fbd6-00fb-4304-b1c8-70ff91c6b278" containerName="console" containerID="cri-o://f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2" gracePeriod=15 Oct 14 13:25:21.309099 master-1 kubenswrapper[4740]: I1014 13:25:21.308906 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-77d8f866f9-skvf6_fe87fbd6-00fb-4304-b1c8-70ff91c6b278/console/0.log" Oct 14 13:25:21.309099 master-1 kubenswrapper[4740]: I1014 13:25:21.309012 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:25:21.411028 master-1 kubenswrapper[4740]: I1014 13:25:21.410935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-service-ca\") pod \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") "
Oct 14 13:25:21.411028 master-1 kubenswrapper[4740]: I1014 13:25:21.411024 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-config\") pod \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") "
Oct 14 13:25:21.411379 master-1 kubenswrapper[4740]: I1014 13:25:21.411075 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-oauth-serving-cert\") pod \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") "
Oct 14 13:25:21.411379 master-1 kubenswrapper[4740]: I1014 13:25:21.411131 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-trusted-ca-bundle\") pod \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") "
Oct 14 13:25:21.411379 master-1 kubenswrapper[4740]: I1014 13:25:21.411208 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-serving-cert\") pod \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") "
Oct 14 13:25:21.411379 master-1 kubenswrapper[4740]: I1014 13:25:21.411267 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-oauth-config\") pod \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") "
Oct 14 13:25:21.411379 master-1 kubenswrapper[4740]: I1014 13:25:21.411328 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7w6g\" (UniqueName: \"kubernetes.io/projected/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-kube-api-access-b7w6g\") pod \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\" (UID: \"fe87fbd6-00fb-4304-b1c8-70ff91c6b278\") "
Oct 14 13:25:21.412172 master-1 kubenswrapper[4740]: I1014 13:25:21.412112 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fe87fbd6-00fb-4304-b1c8-70ff91c6b278" (UID: "fe87fbd6-00fb-4304-b1c8-70ff91c6b278"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:25:21.412477 master-1 kubenswrapper[4740]: I1014 13:25:21.412408 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fe87fbd6-00fb-4304-b1c8-70ff91c6b278" (UID: "fe87fbd6-00fb-4304-b1c8-70ff91c6b278"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:25:21.412477 master-1 kubenswrapper[4740]: I1014 13:25:21.412410 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-config" (OuterVolumeSpecName: "console-config") pod "fe87fbd6-00fb-4304-b1c8-70ff91c6b278" (UID: "fe87fbd6-00fb-4304-b1c8-70ff91c6b278"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:25:21.412705 master-1 kubenswrapper[4740]: I1014 13:25:21.412667 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-service-ca" (OuterVolumeSpecName: "service-ca") pod "fe87fbd6-00fb-4304-b1c8-70ff91c6b278" (UID: "fe87fbd6-00fb-4304-b1c8-70ff91c6b278"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:25:21.414408 master-1 kubenswrapper[4740]: I1014 13:25:21.414352 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fe87fbd6-00fb-4304-b1c8-70ff91c6b278" (UID: "fe87fbd6-00fb-4304-b1c8-70ff91c6b278"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:25:21.415704 master-1 kubenswrapper[4740]: I1014 13:25:21.415624 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fe87fbd6-00fb-4304-b1c8-70ff91c6b278" (UID: "fe87fbd6-00fb-4304-b1c8-70ff91c6b278"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:25:21.417632 master-1 kubenswrapper[4740]: I1014 13:25:21.417578 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-kube-api-access-b7w6g" (OuterVolumeSpecName: "kube-api-access-b7w6g") pod "fe87fbd6-00fb-4304-b1c8-70ff91c6b278" (UID: "fe87fbd6-00fb-4304-b1c8-70ff91c6b278"). InnerVolumeSpecName "kube-api-access-b7w6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:25:21.513167 master-1 kubenswrapper[4740]: I1014 13:25:21.513062 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-service-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:21.513167 master-1 kubenswrapper[4740]: I1014 13:25:21.513134 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:21.513167 master-1 kubenswrapper[4740]: I1014 13:25:21.513155 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-oauth-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:21.513167 master-1 kubenswrapper[4740]: I1014 13:25:21.513177 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:21.513682 master-1 kubenswrapper[4740]: I1014 13:25:21.513195 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-oauth-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:21.513682 master-1 kubenswrapper[4740]: I1014 13:25:21.513212 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-console-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:21.513682 master-1 kubenswrapper[4740]: I1014 13:25:21.513311 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7w6g\" (UniqueName: \"kubernetes.io/projected/fe87fbd6-00fb-4304-b1c8-70ff91c6b278-kube-api-access-b7w6g\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:21.563399 master-1 kubenswrapper[4740]: I1014 13:25:21.563178 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-77d8f866f9-skvf6_fe87fbd6-00fb-4304-b1c8-70ff91c6b278/console/0.log"
Oct 14 13:25:21.563683 master-1 kubenswrapper[4740]: I1014 13:25:21.563481 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77d8f866f9-skvf6" event={"ID":"fe87fbd6-00fb-4304-b1c8-70ff91c6b278","Type":"ContainerDied","Data":"f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2"}
Oct 14 13:25:21.563683 master-1 kubenswrapper[4740]: I1014 13:25:21.563605 4740 scope.go:117] "RemoveContainer" containerID="f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2"
Oct 14 13:25:21.563683 master-1 kubenswrapper[4740]: I1014 13:25:21.563401 4740 generic.go:334] "Generic (PLEG): container finished" podID="fe87fbd6-00fb-4304-b1c8-70ff91c6b278" containerID="f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2" exitCode=2
Oct 14 13:25:21.564023 master-1 kubenswrapper[4740]: I1014 13:25:21.563743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77d8f866f9-skvf6" event={"ID":"fe87fbd6-00fb-4304-b1c8-70ff91c6b278","Type":"ContainerDied","Data":"03e1bae33777efe1bd0baf164ff5ad35bbfa1d3bd4a412da0313adfcc87a5400"}
Oct 14 13:25:21.564345 master-1 kubenswrapper[4740]: I1014 13:25:21.564277 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77d8f866f9-skvf6"
Oct 14 13:25:21.591328 master-1 kubenswrapper[4740]: I1014 13:25:21.591266 4740 scope.go:117] "RemoveContainer" containerID="f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2"
Oct 14 13:25:21.591985 master-1 kubenswrapper[4740]: E1014 13:25:21.591929 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2\": container with ID starting with f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2 not found: ID does not exist" containerID="f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2"
Oct 14 13:25:21.592087 master-1 kubenswrapper[4740]: I1014 13:25:21.591981 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2"} err="failed to get container status \"f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2\": rpc error: code = NotFound desc = could not find container \"f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2\": container with ID starting with f571e66510ddebd284c25bcebdc28c566db35d758d17c0253c4c618ef3ef55e2 not found: ID does not exist"
Oct 14 13:25:21.698808 master-1 kubenswrapper[4740]: I1014 13:25:21.698719 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-77d8f866f9-skvf6"]
Oct 14 13:25:21.809728 master-1 kubenswrapper[4740]: I1014 13:25:21.809657 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-77d8f866f9-skvf6"]
Oct 14 13:25:21.830691 master-1 kubenswrapper[4740]: I1014 13:25:21.830524 4740 scope.go:117] "RemoveContainer" containerID="516862ae041aab7390f584c0cbf3cdf2154c45cbdb2591237446bb7d27696ed4"
Oct 14 13:25:21.847653 master-1 kubenswrapper[4740]: I1014 13:25:21.847605 4740 scope.go:117] "RemoveContainer" containerID="410d42ad1c03831b0b0e58b34e9c7c20fbce91f19d06aca1df997680840d4c82"
Oct 14 13:25:21.872364 master-1 kubenswrapper[4740]: I1014 13:25:21.868919 4740 scope.go:117] "RemoveContainer" containerID="54f46dc9ca357d24aa0d18e8d5db0aee69d6d73cc41e66f9af2ffdab2e4b7cc3"
Oct 14 13:25:21.887663 master-1 kubenswrapper[4740]: I1014 13:25:21.887523 4740 scope.go:117] "RemoveContainer" containerID="84816b63a679d0da082379c16b62aec3006ff768247ca2c54217f373f103c8e1"
Oct 14 13:25:22.955162 master-1 kubenswrapper[4740]: I1014 13:25:22.955111 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe87fbd6-00fb-4304-b1c8-70ff91c6b278" path="/var/lib/kubelet/pods/fe87fbd6-00fb-4304-b1c8-70ff91c6b278/volumes"
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: I1014 13:25:25.741911 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:25:25.741979 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:25:25.743764 master-1 kubenswrapper[4740]: I1014 13:25:25.741994 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: I1014 13:25:30.745552 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]etcd excluded: ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]etcd-readiness excluded: ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartUserInformer ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:25:30.745662 master-1 kubenswrapper[4740]: I1014 13:25:30.745652 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:25:35.679551 master-1 kubenswrapper[4740]: I1014 13:25:35.679464 4740 generic.go:334] "Generic (PLEG): container finished" podID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerID="f2e2740652494e2a8601bb964a94737bdc249abe23a6463336f3a8b42bda2bba" exitCode=0
Oct 14 13:25:35.679551 master-1 kubenswrapper[4740]: I1014 13:25:35.679542 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" event={"ID":"d5a933b7-cba6-4bb3-9529-918d06be4da7","Type":"ContainerDied","Data":"f2e2740652494e2a8601bb964a94737bdc249abe23a6463336f3a8b42bda2bba"}
Oct 14 13:25:35.738059 master-1 kubenswrapper[4740]: I1014 13:25:35.737919 4740 patch_prober.go:28] interesting pod/apiserver-84c8b8d745-j8fqz container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.128.0.96:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Oct 14 13:25:35.738372 master-1 kubenswrapper[4740]: I1014 13:25:35.738059 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.96:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.96:8443: connect: connection refused"
Oct 14 13:25:36.141342 master-1 kubenswrapper[4740]: I1014 13:25:36.141079 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz"
Oct 14 13:25:36.191880 master-1 kubenswrapper[4740]: I1014 13:25:36.191786 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-g299n"]
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: E1014 13:25:36.192109 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: I1014 13:25:36.192125 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: E1014 13:25:36.192139 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="fix-audit-permissions"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: I1014 13:25:36.192147 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="fix-audit-permissions"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: E1014 13:25:36.192158 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c68b53-ad48-4681-9146-e0221d3f080e" containerName="installer"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: I1014 13:25:36.192165 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c68b53-ad48-4681-9146-e0221d3f080e" containerName="installer"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: E1014 13:25:36.192177 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe87fbd6-00fb-4304-b1c8-70ff91c6b278" containerName="console"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: I1014 13:25:36.192184 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe87fbd6-00fb-4304-b1c8-70ff91c6b278" containerName="console"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: E1014 13:25:36.192200 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver"
Oct 14 13:25:36.192265 master-1 kubenswrapper[4740]: I1014 13:25:36.192206 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver"
Oct 14 13:25:36.192582 master-1 kubenswrapper[4740]: I1014 13:25:36.192321 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" containerName="oauth-apiserver"
Oct 14 13:25:36.192582 master-1 kubenswrapper[4740]: I1014 13:25:36.192334 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef43de0-1319-41d0-9ca4-d4795c56c459" containerName="metrics-server"
Oct 14 13:25:36.192582 master-1 kubenswrapper[4740]: I1014 13:25:36.192344 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe87fbd6-00fb-4304-b1c8-70ff91c6b278" containerName="console"
Oct 14 13:25:36.192582 master-1 kubenswrapper[4740]: I1014 13:25:36.192353 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c68b53-ad48-4681-9146-e0221d3f080e" containerName="installer"
Oct 14 13:25:36.193116 master-1 kubenswrapper[4740]: I1014 13:25:36.193081 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.215625 master-1 kubenswrapper[4740]: I1014 13:25:36.215544 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-g299n"]
Oct 14 13:25:36.246082 master-1 kubenswrapper[4740]: I1014 13:25:36.246012 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-serving-cert\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246450 master-1 kubenswrapper[4740]: I1014 13:25:36.246149 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-policies\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246450 master-1 kubenswrapper[4740]: I1014 13:25:36.246195 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-dir\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246450 master-1 kubenswrapper[4740]: I1014 13:25:36.246268 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-client\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246450 master-1 kubenswrapper[4740]: I1014 13:25:36.246325 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:25:36.246450 master-1 kubenswrapper[4740]: I1014 13:25:36.246340 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf4qh\" (UniqueName: \"kubernetes.io/projected/d5a933b7-cba6-4bb3-9529-918d06be4da7-kube-api-access-rf4qh\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246450 master-1 kubenswrapper[4740]: I1014 13:25:36.246427 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-trusted-ca-bundle\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246709 master-1 kubenswrapper[4740]: I1014 13:25:36.246476 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-serving-ca\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246709 master-1 kubenswrapper[4740]: I1014 13:25:36.246517 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-encryption-config\") pod \"d5a933b7-cba6-4bb3-9529-918d06be4da7\" (UID: \"d5a933b7-cba6-4bb3-9529-918d06be4da7\") "
Oct 14 13:25:36.246842 master-1 kubenswrapper[4740]: I1014 13:25:36.246792 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:25:36.247449 master-1 kubenswrapper[4740]: I1014 13:25:36.247404 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-policies\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.247517 master-1 kubenswrapper[4740]: I1014 13:25:36.247447 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5a933b7-cba6-4bb3-9529-918d06be4da7-audit-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.248086 master-1 kubenswrapper[4740]: I1014 13:25:36.248040 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:25:36.248086 master-1 kubenswrapper[4740]: I1014 13:25:36.248054 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:25:36.250047 master-1 kubenswrapper[4740]: I1014 13:25:36.250010 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:25:36.250836 master-1 kubenswrapper[4740]: I1014 13:25:36.250803 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a933b7-cba6-4bb3-9529-918d06be4da7-kube-api-access-rf4qh" (OuterVolumeSpecName: "kube-api-access-rf4qh") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "kube-api-access-rf4qh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:25:36.250913 master-1 kubenswrapper[4740]: I1014 13:25:36.250804 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:25:36.251061 master-1 kubenswrapper[4740]: I1014 13:25:36.251021 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5a933b7-cba6-4bb3-9529-918d06be4da7" (UID: "d5a933b7-cba6-4bb3-9529-918d06be4da7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:25:36.348698 master-1 kubenswrapper[4740]: I1014 13:25:36.348531 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-encryption-config\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.348698 master-1 kubenswrapper[4740]: I1014 13:25:36.348583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-audit-policies\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.349410 master-1 kubenswrapper[4740]: I1014 13:25:36.348912 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-serving-cert\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.349410 master-1 kubenswrapper[4740]: I1014 13:25:36.349010 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-996ls\" (UniqueName: \"kubernetes.io/projected/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-kube-api-access-996ls\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.349410 master-1 kubenswrapper[4740]: I1014 13:25:36.349094 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-etcd-serving-ca\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.349548 master-1 kubenswrapper[4740]: I1014 13:25:36.349423 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-audit-dir\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.349548 master-1 kubenswrapper[4740]: I1014 13:25:36.349528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-trusted-ca-bundle\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.349695 master-1 kubenswrapper[4740]: I1014 13:25:36.349621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-etcd-client\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.349835 master-1 kubenswrapper[4740]: I1014 13:25:36.349812 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-serving-cert\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.349892 master-1 kubenswrapper[4740]: I1014 13:25:36.349842 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-client\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.349892 master-1 kubenswrapper[4740]: I1014 13:25:36.349860 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf4qh\" (UniqueName: \"kubernetes.io/projected/d5a933b7-cba6-4bb3-9529-918d06be4da7-kube-api-access-rf4qh\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.349892 master-1 kubenswrapper[4740]: I1014 13:25:36.349878 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-trusted-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.350000 master-1 kubenswrapper[4740]: I1014 13:25:36.349891 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5a933b7-cba6-4bb3-9529-918d06be4da7-etcd-serving-ca\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.350000 master-1 kubenswrapper[4740]: I1014 13:25:36.349907 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5a933b7-cba6-4bb3-9529-918d06be4da7-encryption-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:25:36.451419 master-1 kubenswrapper[4740]: I1014 13:25:36.451309 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-serving-cert\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.451419 master-1 kubenswrapper[4740]: I1014 13:25:36.451405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-996ls\" (UniqueName: \"kubernetes.io/projected/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-kube-api-access-996ls\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.451872 master-1 kubenswrapper[4740]: I1014 13:25:36.451477 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-etcd-serving-ca\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.451872 master-1 kubenswrapper[4740]: I1014 13:25:36.451556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-audit-dir\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.451872 master-1 kubenswrapper[4740]: I1014 13:25:36.451601 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-trusted-ca-bundle\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.451872 master-1 kubenswrapper[4740]: I1014 13:25:36.451645 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-etcd-client\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.451872 master-1 kubenswrapper[4740]: I1014 13:25:36.451689 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-encryption-config\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.451872 master-1 kubenswrapper[4740]: I1014 13:25:36.451725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-audit-policies\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.452345 master-1 kubenswrapper[4740]: I1014 13:25:36.452277 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-audit-dir\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.454369 master-1 kubenswrapper[4740]: I1014 13:25:36.452555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-trusted-ca-bundle\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.454369 master-1 kubenswrapper[4740]: I1014 13:25:36.453144 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-audit-policies\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.454369 master-1 kubenswrapper[4740]: I1014 13:25:36.454299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-etcd-serving-ca\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.455135 master-1 kubenswrapper[4740]: I1014 13:25:36.455057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-serving-cert\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.457571 master-1 kubenswrapper[4740]: I1014 13:25:36.457495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-encryption-config\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.458173 master-1 kubenswrapper[4740]: I1014 13:25:36.458072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-etcd-client\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.471857 master-1 kubenswrapper[4740]: I1014 13:25:36.471796 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-996ls\" (UniqueName: \"kubernetes.io/projected/b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd-kube-api-access-996ls\") pod \"apiserver-7b6784d654-g299n\" (UID: \"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd\") " pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n"
Oct 14 13:25:36.510274 master-1 kubenswrapper[4740]: I1014 13:25:36.510125 4740 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" Oct 14 13:25:36.692688 master-1 kubenswrapper[4740]: I1014 13:25:36.692591 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" event={"ID":"d5a933b7-cba6-4bb3-9529-918d06be4da7","Type":"ContainerDied","Data":"d56e9fef9fbecfda134ec0e5c15a1d4b21911a3ca69b963035ea391519bf2368"} Oct 14 13:25:36.692688 master-1 kubenswrapper[4740]: I1014 13:25:36.692681 4740 scope.go:117] "RemoveContainer" containerID="f2e2740652494e2a8601bb964a94737bdc249abe23a6463336f3a8b42bda2bba" Oct 14 13:25:36.693637 master-1 kubenswrapper[4740]: I1014 13:25:36.692764 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz" Oct 14 13:25:36.722079 master-1 kubenswrapper[4740]: I1014 13:25:36.722000 4740 scope.go:117] "RemoveContainer" containerID="873fa7a6daf094c261cd142cbf648252d7f1dacb06fa63c2b1dfc1d8529c4c70" Oct 14 13:25:36.753724 master-1 kubenswrapper[4740]: I1014 13:25:36.749824 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz"] Oct 14 13:25:36.761552 master-1 kubenswrapper[4740]: I1014 13:25:36.761435 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz"] Oct 14 13:25:36.956990 master-1 kubenswrapper[4740]: I1014 13:25:36.956799 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a933b7-cba6-4bb3-9529-918d06be4da7" path="/var/lib/kubelet/pods/d5a933b7-cba6-4bb3-9529-918d06be4da7/volumes" Oct 14 13:25:37.024640 master-1 kubenswrapper[4740]: I1014 13:25:37.023939 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b6784d654-g299n"] Oct 14 13:25:37.038387 master-1 kubenswrapper[4740]: W1014 13:25:37.035003 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8edbc3a_5f27_44fb_bb3a_d35557ffc3bd.slice/crio-1acdeef224bad288852de34ffde805c0fb0e4482f0b41c846c27326d58264b29 WatchSource:0}: Error finding container 1acdeef224bad288852de34ffde805c0fb0e4482f0b41c846c27326d58264b29: Status 404 returned error can't find the container with id 1acdeef224bad288852de34ffde805c0fb0e4482f0b41c846c27326d58264b29 Oct 14 13:25:37.704415 master-1 kubenswrapper[4740]: I1014 13:25:37.704161 4740 generic.go:334] "Generic (PLEG): container finished" podID="b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd" containerID="d8c1729002cdd45f0e26e0f1186cf6e1c50824b17c7c3900ecd968ad95bbd221" exitCode=0 Oct 14 13:25:37.704415 master-1 kubenswrapper[4740]: I1014 13:25:37.704316 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" event={"ID":"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd","Type":"ContainerDied","Data":"d8c1729002cdd45f0e26e0f1186cf6e1c50824b17c7c3900ecd968ad95bbd221"} Oct 14 13:25:37.704415 master-1 kubenswrapper[4740]: I1014 13:25:37.704390 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" event={"ID":"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd","Type":"ContainerStarted","Data":"1acdeef224bad288852de34ffde805c0fb0e4482f0b41c846c27326d58264b29"} Oct 14 13:25:38.714842 master-1 kubenswrapper[4740]: I1014 13:25:38.714793 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" event={"ID":"b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd","Type":"ContainerStarted","Data":"c54bb7bee6ec798e93f86af18ef77b0390f9af852d0865a27044b60fe0060a06"} Oct 14 13:25:38.759915 master-1 kubenswrapper[4740]: I1014 13:25:38.759810 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" podStartSLOduration=54.759784253 podStartE2EDuration="54.759784253s" 
podCreationTimestamp="2025-10-14 13:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:25:38.75197106 +0000 UTC m=+1164.562260409" watchObservedRunningTime="2025-10-14 13:25:38.759784253 +0000 UTC m=+1164.570073592" Oct 14 13:25:41.513087 master-1 kubenswrapper[4740]: I1014 13:25:41.510559 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" Oct 14 13:25:41.513087 master-1 kubenswrapper[4740]: I1014 13:25:41.510639 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" Oct 14 13:25:41.528175 master-1 kubenswrapper[4740]: I1014 13:25:41.528111 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" Oct 14 13:25:41.750483 master-1 kubenswrapper[4740]: I1014 13:25:41.750412 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7b6784d654-g299n" Oct 14 13:25:51.921756 master-1 kubenswrapper[4740]: I1014 13:25:51.921658 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:25:51.922807 master-1 kubenswrapper[4740]: I1014 13:25:51.922274 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" containerID="cri-o://6deb61510a50f20d8a9f8067be1b2fc90640db24c7a18c642c99fb75420a3916" gracePeriod=30 Oct 14 13:25:51.922807 master-1 kubenswrapper[4740]: I1014 13:25:51.922342 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" 
containerID="cri-o://38b057ae8b40d687f60b71f7fba2f8022d9c13a14d7ce7d0dc5582d37e59a6b0" gracePeriod=30 Oct 14 13:25:51.922807 master-1 kubenswrapper[4740]: I1014 13:25:51.922422 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" containerID="cri-o://8ccc55e8766de0b5ea595b51afd74c9ee750d77dbab2d822a06ca94d46f0d682" gracePeriod=30 Oct 14 13:25:51.922807 master-1 kubenswrapper[4740]: I1014 13:25:51.922586 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" containerID="cri-o://12e4d73d95a7dc18b338e89f6b04f58e4c4375db44a191a85f3a89f8fd4875aa" gracePeriod=30 Oct 14 13:25:51.922807 master-1 kubenswrapper[4740]: I1014 13:25:51.922544 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-1" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" containerID="cri-o://d15297c41202b9d0b9c85f5d1690476d1f865b7ea28526de3a3203a97bfd1c48" gracePeriod=30 Oct 14 13:25:51.926757 master-1 kubenswrapper[4740]: I1014 13:25:51.926679 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-1"] Oct 14 13:25:51.927180 master-1 kubenswrapper[4740]: E1014 13:25:51.927126 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="setup" Oct 14 13:25:51.927180 master-1 kubenswrapper[4740]: I1014 13:25:51.927173 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="setup" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: E1014 13:25:51.927212 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-ensure-env-vars" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: 
I1014 13:25:51.927264 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-ensure-env-vars" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: E1014 13:25:51.927284 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: I1014 13:25:51.927300 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: E1014 13:25:51.927316 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: I1014 13:25:51.927332 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: E1014 13:25:51.927359 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: I1014 13:25:51.927375 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" Oct 14 13:25:51.927400 master-1 kubenswrapper[4740]: E1014 13:25:51.927403 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-resources-copy" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.927420 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-resources-copy" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: E1014 13:25:51.927448 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" Oct 14 
13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.927463 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: E1014 13:25:51.927480 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.927637 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.928329 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.928377 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-rev" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.928411 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-readyz" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.928433 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcd-metrics" Oct 14 13:25:51.928978 master-1 kubenswrapper[4740]: I1014 13:25:51.928518 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" containerName="etcdctl" Oct 14 13:25:52.052597 master-1 kubenswrapper[4740]: I1014 13:25:52.052527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-cert-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 
13:25:52.052597 master-1 kubenswrapper[4740]: I1014 13:25:52.052602 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-usr-local-bin\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.052901 master-1 kubenswrapper[4740]: I1014 13:25:52.052652 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-data-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.052901 master-1 kubenswrapper[4740]: I1014 13:25:52.052678 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-static-pod-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.052901 master-1 kubenswrapper[4740]: I1014 13:25:52.052726 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-resource-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.052901 master-1 kubenswrapper[4740]: I1014 13:25:52.052772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-log-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.158274 master-1 kubenswrapper[4740]: I1014 13:25:52.157878 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-cert-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.158274 master-1 kubenswrapper[4740]: I1014 13:25:52.157985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-usr-local-bin\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.158274 master-1 kubenswrapper[4740]: I1014 13:25:52.158029 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-cert-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.158274 master-1 kubenswrapper[4740]: I1014 13:25:52.158102 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-data-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.158274 master-1 kubenswrapper[4740]: I1014 13:25:52.158131 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-usr-local-bin\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.158274 master-1 kubenswrapper[4740]: I1014 13:25:52.158172 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-static-pod-dir\") pod \"etcd-master-1\" (UID: 
\"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.158274 master-1 kubenswrapper[4740]: I1014 13:25:52.158190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-data-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.159042 master-1 kubenswrapper[4740]: I1014 13:25:52.158343 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-resource-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.159042 master-1 kubenswrapper[4740]: I1014 13:25:52.158416 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-log-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.159042 master-1 kubenswrapper[4740]: I1014 13:25:52.158493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-resource-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.159042 master-1 kubenswrapper[4740]: I1014 13:25:52.158628 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-log-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.160865 master-1 kubenswrapper[4740]: I1014 13:25:52.160046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/dbeb1098f6b7e52b91afcf2e9b50b014-static-pod-dir\") pod \"etcd-master-1\" (UID: \"dbeb1098f6b7e52b91afcf2e9b50b014\") " pod="openshift-etcd/etcd-master-1" Oct 14 13:25:52.827956 master-1 kubenswrapper[4740]: I1014 13:25:52.827845 4740 generic.go:334] "Generic (PLEG): container finished" podID="cb24e814-5147-4bab-a2ac-0fa7b97b5ecf" containerID="c4889899325fa0123ff00f8c9f15c55e1a001211422e38ff28c0cfa66549f17e" exitCode=0 Oct 14 13:25:52.827956 master-1 kubenswrapper[4740]: I1014 13:25:52.827951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf","Type":"ContainerDied","Data":"c4889899325fa0123ff00f8c9f15c55e1a001211422e38ff28c0cfa66549f17e"} Oct 14 13:25:52.830167 master-1 kubenswrapper[4740]: I1014 13:25:52.830124 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log" Oct 14 13:25:52.831248 master-1 kubenswrapper[4740]: I1014 13:25:52.831176 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log" Oct 14 13:25:52.833742 master-1 kubenswrapper[4740]: I1014 13:25:52.833701 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="38b057ae8b40d687f60b71f7fba2f8022d9c13a14d7ce7d0dc5582d37e59a6b0" exitCode=2 Oct 14 13:25:52.833742 master-1 kubenswrapper[4740]: I1014 13:25:52.833732 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="12e4d73d95a7dc18b338e89f6b04f58e4c4375db44a191a85f3a89f8fd4875aa" exitCode=0 Oct 14 13:25:52.833742 master-1 kubenswrapper[4740]: I1014 13:25:52.833739 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" 
containerID="8ccc55e8766de0b5ea595b51afd74c9ee750d77dbab2d822a06ca94d46f0d682" exitCode=2 Oct 14 13:25:52.893057 master-1 kubenswrapper[4740]: I1014 13:25:52.892936 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="2b1859aa05c2c75eb43d086c9ccd9c86" podUID="dbeb1098f6b7e52b91afcf2e9b50b014" Oct 14 13:25:54.148160 master-1 kubenswrapper[4740]: I1014 13:25:54.148031 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-1" Oct 14 13:25:54.286995 master-1 kubenswrapper[4740]: I1014 13:25:54.286920 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kube-api-access\") pod \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " Oct 14 13:25:54.287284 master-1 kubenswrapper[4740]: I1014 13:25:54.287017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-var-lock\") pod \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " Oct 14 13:25:54.287284 master-1 kubenswrapper[4740]: I1014 13:25:54.287145 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kubelet-dir\") pod \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\" (UID: \"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf\") " Oct 14 13:25:54.287398 master-1 kubenswrapper[4740]: I1014 13:25:54.287345 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-var-lock" (OuterVolumeSpecName: "var-lock") pod "cb24e814-5147-4bab-a2ac-0fa7b97b5ecf" (UID: 
"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:25:54.287477 master-1 kubenswrapper[4740]: I1014 13:25:54.287439 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb24e814-5147-4bab-a2ac-0fa7b97b5ecf" (UID: "cb24e814-5147-4bab-a2ac-0fa7b97b5ecf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:25:54.287918 master-1 kubenswrapper[4740]: I1014 13:25:54.287878 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:54.299485 master-1 kubenswrapper[4740]: I1014 13:25:54.299417 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb24e814-5147-4bab-a2ac-0fa7b97b5ecf" (UID: "cb24e814-5147-4bab-a2ac-0fa7b97b5ecf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:25:54.303050 master-1 kubenswrapper[4740]: I1014 13:25:54.302984 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body= Oct 14 13:25:54.303124 master-1 kubenswrapper[4740]: I1014 13:25:54.303072 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" Oct 14 13:25:54.395849 master-1 kubenswrapper[4740]: I1014 13:25:54.395674 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:54.395849 master-1 kubenswrapper[4740]: I1014 13:25:54.395713 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb24e814-5147-4bab-a2ac-0fa7b97b5ecf-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:25:54.849359 master-1 kubenswrapper[4740]: I1014 13:25:54.849253 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-10-master-1" event={"ID":"cb24e814-5147-4bab-a2ac-0fa7b97b5ecf","Type":"ContainerDied","Data":"a794ebe78f12315605d06b3ce426051a6431f1ae1a3d07468a6ec2a86ebc702a"} Oct 14 13:25:54.849359 master-1 kubenswrapper[4740]: I1014 13:25:54.849318 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a794ebe78f12315605d06b3ce426051a6431f1ae1a3d07468a6ec2a86ebc702a" Oct 14 13:25:54.849810 master-1 kubenswrapper[4740]: I1014 13:25:54.849443 4740 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-etcd/installer-10-master-1"
Oct 14 13:25:59.303020 master-1 kubenswrapper[4740]: I1014 13:25:59.302906 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:25:59.303020 master-1 kubenswrapper[4740]: I1014 13:25:59.303002 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:04.303687 master-1 kubenswrapper[4740]: I1014 13:26:04.303599 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:26:04.303687 master-1 kubenswrapper[4740]: I1014 13:26:04.303671 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:04.304846 master-1 kubenswrapper[4740]: I1014 13:26:04.303776 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-guard-master-1"
Oct 14 13:26:04.304846 master-1 kubenswrapper[4740]: I1014 13:26:04.304303 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:26:04.304846 master-1 kubenswrapper[4740]: I1014 13:26:04.304380 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:09.303885 master-1 kubenswrapper[4740]: I1014 13:26:09.303778 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:26:09.303885 master-1 kubenswrapper[4740]: I1014 13:26:09.303865 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:14.303096 master-1 kubenswrapper[4740]: I1014 13:26:14.303047 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:26:14.303647 master-1 kubenswrapper[4740]: I1014 13:26:14.303109 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:19.303495 master-1 kubenswrapper[4740]: I1014 13:26:19.303396 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:26:19.306069 master-1 kubenswrapper[4740]: I1014 13:26:19.303510 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:22.059585 master-1 kubenswrapper[4740]: I1014 13:26:22.059451 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log"
Oct 14 13:26:22.060838 master-1 kubenswrapper[4740]: I1014 13:26:22.060800 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log"
Oct 14 13:26:22.061435 master-1 kubenswrapper[4740]: I1014 13:26:22.061402 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd/0.log"
Oct 14 13:26:22.062186 master-1 kubenswrapper[4740]: I1014 13:26:22.061977 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcdctl/0.log"
Oct 14 13:26:22.063881 master-1 kubenswrapper[4740]: I1014 13:26:22.063806 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="d15297c41202b9d0b9c85f5d1690476d1f865b7ea28526de3a3203a97bfd1c48" exitCode=137
Oct 14 13:26:22.063881 master-1 kubenswrapper[4740]: I1014 13:26:22.063852 4740 generic.go:334] "Generic (PLEG): container finished" podID="2b1859aa05c2c75eb43d086c9ccd9c86" containerID="6deb61510a50f20d8a9f8067be1b2fc90640db24c7a18c642c99fb75420a3916" exitCode=137
Oct 14 13:26:22.520670 master-1 kubenswrapper[4740]: I1014 13:26:22.520594 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log"
Oct 14 13:26:22.521846 master-1 kubenswrapper[4740]: I1014 13:26:22.521790 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log"
Oct 14 13:26:22.522560 master-1 kubenswrapper[4740]: I1014 13:26:22.522515 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd/0.log"
Oct 14 13:26:22.523092 master-1 kubenswrapper[4740]: I1014 13:26:22.523042 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcdctl/0.log"
Oct 14 13:26:22.524589 master-1 kubenswrapper[4740]: I1014 13:26:22.524548 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:22.531171 master-1 kubenswrapper[4740]: I1014 13:26:22.531031 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="2b1859aa05c2c75eb43d086c9ccd9c86" podUID="dbeb1098f6b7e52b91afcf2e9b50b014"
Oct 14 13:26:22.625756 master-1 kubenswrapper[4740]: I1014 13:26:22.625636 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 14 13:26:22.625756 master-1 kubenswrapper[4740]: I1014 13:26:22.625735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625786 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625820 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625841 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625897 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625928 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") pod \"2b1859aa05c2c75eb43d086c9ccd9c86\" (UID: \"2b1859aa05c2c75eb43d086c9ccd9c86\") "
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625911 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625953 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir" (OuterVolumeSpecName: "data-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.625996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:26:22.626095 master-1 kubenswrapper[4740]: I1014 13:26:22.626005 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir" (OuterVolumeSpecName: "log-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:26:22.626539 master-1 kubenswrapper[4740]: I1014 13:26:22.626105 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "2b1859aa05c2c75eb43d086c9ccd9c86" (UID: "2b1859aa05c2c75eb43d086c9ccd9c86"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:26:22.626539 master-1 kubenswrapper[4740]: I1014 13:26:22.626270 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-resource-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:26:22.626539 master-1 kubenswrapper[4740]: I1014 13:26:22.626294 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-cert-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:26:22.626539 master-1 kubenswrapper[4740]: I1014 13:26:22.626307 4740 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-data-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:26:22.626539 master-1 kubenswrapper[4740]: I1014 13:26:22.626318 4740 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-usr-local-bin\") on node \"master-1\" DevicePath \"\""
Oct 14 13:26:22.626539 master-1 kubenswrapper[4740]: I1014 13:26:22.626330 4740 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-log-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:26:22.626539 master-1 kubenswrapper[4740]: I1014 13:26:22.626341 4740 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2b1859aa05c2c75eb43d086c9ccd9c86-static-pod-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:26:22.956439 master-1 kubenswrapper[4740]: I1014 13:26:22.956359 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1859aa05c2c75eb43d086c9ccd9c86" path="/var/lib/kubelet/pods/2b1859aa05c2c75eb43d086c9ccd9c86/volumes"
Oct 14 13:26:23.074944 master-1 kubenswrapper[4740]: I1014 13:26:23.074847 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-rev/0.log"
Oct 14 13:26:23.076655 master-1 kubenswrapper[4740]: I1014 13:26:23.076595 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd-metrics/0.log"
Oct 14 13:26:23.078099 master-1 kubenswrapper[4740]: I1014 13:26:23.078053 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcd/0.log"
Oct 14 13:26:23.078780 master-1 kubenswrapper[4740]: I1014 13:26:23.078729 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_2b1859aa05c2c75eb43d086c9ccd9c86/etcdctl/0.log"
Oct 14 13:26:23.080718 master-1 kubenswrapper[4740]: I1014 13:26:23.080662 4740 scope.go:117] "RemoveContainer" containerID="38b057ae8b40d687f60b71f7fba2f8022d9c13a14d7ce7d0dc5582d37e59a6b0"
Oct 14 13:26:23.080943 master-1 kubenswrapper[4740]: I1014 13:26:23.080887 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:23.089656 master-1 kubenswrapper[4740]: I1014 13:26:23.089572 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-etcd/etcd-master-1" oldPodUID="2b1859aa05c2c75eb43d086c9ccd9c86" podUID="dbeb1098f6b7e52b91afcf2e9b50b014"
Oct 14 13:26:23.100418 master-1 kubenswrapper[4740]: I1014 13:26:23.100362 4740 scope.go:117] "RemoveContainer" containerID="12e4d73d95a7dc18b338e89f6b04f58e4c4375db44a191a85f3a89f8fd4875aa"
Oct 14 13:26:23.119470 master-1 kubenswrapper[4740]: I1014 13:26:23.119422 4740 scope.go:117] "RemoveContainer" containerID="8ccc55e8766de0b5ea595b51afd74c9ee750d77dbab2d822a06ca94d46f0d682"
Oct 14 13:26:23.141922 master-1 kubenswrapper[4740]: I1014 13:26:23.141751 4740 scope.go:117] "RemoveContainer" containerID="d15297c41202b9d0b9c85f5d1690476d1f865b7ea28526de3a3203a97bfd1c48"
Oct 14 13:26:23.166336 master-1 kubenswrapper[4740]: I1014 13:26:23.166275 4740 scope.go:117] "RemoveContainer" containerID="6deb61510a50f20d8a9f8067be1b2fc90640db24c7a18c642c99fb75420a3916"
Oct 14 13:26:23.191116 master-1 kubenswrapper[4740]: I1014 13:26:23.191028 4740 scope.go:117] "RemoveContainer" containerID="11ca14a2e498d959bad210f3614e1233732965efc52aed100074f0c18857fa17"
Oct 14 13:26:23.211027 master-1 kubenswrapper[4740]: I1014 13:26:23.210926 4740 scope.go:117] "RemoveContainer" containerID="a428a767276dd7199fd91dd5f2f6673a06e9529e326ebf71716ff52e3c752eb8"
Oct 14 13:26:23.249580 master-1 kubenswrapper[4740]: I1014 13:26:23.249542 4740 scope.go:117] "RemoveContainer" containerID="579299c374d3e90207fed9d0ac7add539c5bee12f49cbd11da0109e242ed4ca2"
Oct 14 13:26:24.303581 master-1 kubenswrapper[4740]: I1014 13:26:24.303464 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:26:24.304526 master-1 kubenswrapper[4740]: I1014 13:26:24.303602 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:26.943207 master-1 kubenswrapper[4740]: I1014 13:26:26.943122 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:26.966755 master-1 kubenswrapper[4740]: I1014 13:26:26.966702 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-1" podUID="b4eb79b1-8b6b-438f-87d6-d9ba16fe8530"
Oct 14 13:26:26.966874 master-1 kubenswrapper[4740]: I1014 13:26:26.966830 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-1" podUID="b4eb79b1-8b6b-438f-87d6-d9ba16fe8530"
Oct 14 13:26:26.998863 master-1 kubenswrapper[4740]: I1014 13:26:26.998081 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:27.022365 master-1 kubenswrapper[4740]: I1014 13:26:27.021300 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-1"]
Oct 14 13:26:27.031172 master-1 kubenswrapper[4740]: I1014 13:26:27.031093 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:27.043876 master-1 kubenswrapper[4740]: I1014 13:26:27.043770 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-1"]
Oct 14 13:26:27.048376 master-1 kubenswrapper[4740]: I1014 13:26:27.048308 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-1"]
Oct 14 13:26:27.111469 master-1 kubenswrapper[4740]: I1014 13:26:27.111411 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"78f5e3d101ecd4f8867b3511bf5d660a534e8446c70fb5a8630169c9398581ee"}
Oct 14 13:26:28.119547 master-1 kubenswrapper[4740]: I1014 13:26:28.119503 4740 generic.go:334] "Generic (PLEG): container finished" podID="dbeb1098f6b7e52b91afcf2e9b50b014" containerID="9bd6e109c652d9ebc95ae119f68046c7c063d313afe389c97a28b7580a820e38" exitCode=0
Oct 14 13:26:28.120124 master-1 kubenswrapper[4740]: I1014 13:26:28.119565 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerDied","Data":"9bd6e109c652d9ebc95ae119f68046c7c063d313afe389c97a28b7580a820e38"}
Oct 14 13:26:29.130089 master-1 kubenswrapper[4740]: I1014 13:26:29.130007 4740 generic.go:334] "Generic (PLEG): container finished" podID="dbeb1098f6b7e52b91afcf2e9b50b014" containerID="20b10be86a26580a30b7208d8b8a27dfb2ef63a3380a06ffea4a50adefc5a88e" exitCode=0
Oct 14 13:26:29.130089 master-1 kubenswrapper[4740]: I1014 13:26:29.130086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerDied","Data":"20b10be86a26580a30b7208d8b8a27dfb2ef63a3380a06ffea4a50adefc5a88e"}
Oct 14 13:26:29.302772 master-1 kubenswrapper[4740]: I1014 13:26:29.302718 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused" start-of-body=
Oct 14 13:26:29.302885 master-1 kubenswrapper[4740]: I1014 13:26:29.302777 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": dial tcp 192.168.34.11:9980: connect: connection refused"
Oct 14 13:26:30.142210 master-1 kubenswrapper[4740]: I1014 13:26:30.142136 4740 generic.go:334] "Generic (PLEG): container finished" podID="dbeb1098f6b7e52b91afcf2e9b50b014" containerID="9692083e4b82a9a67b22b7520c8dc76265bbc6c67f7c4ad676baec0235829766" exitCode=0
Oct 14 13:26:30.142210 master-1 kubenswrapper[4740]: I1014 13:26:30.142213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerDied","Data":"9692083e4b82a9a67b22b7520c8dc76265bbc6c67f7c4ad676baec0235829766"}
Oct 14 13:26:31.152380 master-1 kubenswrapper[4740]: I1014 13:26:31.152321 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"fdca7be21908bbefeb5b581b18c450e8d2c39d941d5b1cbf91cf42c812d7d742"}
Oct 14 13:26:31.152380 master-1 kubenswrapper[4740]: I1014 13:26:31.152377 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"64f520f3523207258e5a1c6c49a5b61e7dbc3d9d46e0fa5b82949baa7711020a"}
Oct 14 13:26:31.152380 master-1 kubenswrapper[4740]: I1014 13:26:31.152390 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"61989b3617b22ae742ec847925c3323ae749d440288868019616aabf3b3e2efd"}
Oct 14 13:26:32.170329 master-1 kubenswrapper[4740]: I1014 13:26:32.170271 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"0ba1634bdbd222d04644e26dd7bcd0518d82c2b729918be28b6aee3e550b4774"}
Oct 14 13:26:32.171171 master-1 kubenswrapper[4740]: I1014 13:26:32.170951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-1" event={"ID":"dbeb1098f6b7e52b91afcf2e9b50b014","Type":"ContainerStarted","Data":"2b1a5206081dd4c5a31cb0000b3ad9ff60a1db8c91cbdf8a021902a8575888da"}
Oct 14 13:26:32.238354 master-1 kubenswrapper[4740]: I1014 13:26:32.238215 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-1" podStartSLOduration=5.238184502 podStartE2EDuration="5.238184502s" podCreationTimestamp="2025-10-14 13:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:26:32.234094653 +0000 UTC m=+1218.044384032" watchObservedRunningTime="2025-10-14 13:26:32.238184502 +0000 UTC m=+1218.048473871"
Oct 14 13:26:37.032285 master-1 kubenswrapper[4740]: I1014 13:26:37.032173 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:37.033169 master-1 kubenswrapper[4740]: I1014 13:26:37.032313 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:39.303364 master-1 kubenswrapper[4740]: I1014 13:26:39.303218 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 13:26:39.303364 master-1 kubenswrapper[4740]: I1014 13:26:39.303344 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:26:44.303826 master-1 kubenswrapper[4740]: I1014 13:26:44.303643 4740 patch_prober.go:28] interesting pod/etcd-guard-master-1 container/guard namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 13:26:44.303826 master-1 kubenswrapper[4740]: I1014 13:26:44.303768 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-guard-master-1" podUID="e4b81afc-7eb3-4303-91f8-593c130da282" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:26:44.845024 master-1 kubenswrapper[4740]: I1014 13:26:44.844933 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-guard-master-1"
Oct 14 13:26:47.049955 master-1 kubenswrapper[4740]: I1014 13:26:47.049861 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-1"
Oct 14 13:26:47.074089 master-1 kubenswrapper[4740]: I1014 13:26:47.074014 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-1"
Oct 14 13:28:10.094099 master-1 kubenswrapper[4740]: I1014 13:28:10.093848 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-mzrkb_ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67/assisted-installer-controller/0.log"
Oct 14 13:28:22.030521 master-1 kubenswrapper[4740]: I1014 13:28:22.030219 4740 scope.go:117] "RemoveContainer" containerID="38eaa2b002f57fd158787266306bcacdb5e72b8d03c630b6fdb586b70cd5b78c"
Oct 14 13:28:22.049140 master-1 kubenswrapper[4740]: I1014 13:28:22.049073 4740 scope.go:117] "RemoveContainer" containerID="9b65a048ae7111360fb7f1062f39927fa58d6a586b76d6fe08a7abd7c74df1f4"
Oct 14 13:28:22.070562 master-1 kubenswrapper[4740]: I1014 13:28:22.070487 4740 scope.go:117] "RemoveContainer" containerID="3964865eda1440fe224070be2658bbefa239f5e54c4bda527ce7baa007443af6"
Oct 14 13:28:22.097780 master-1 kubenswrapper[4740]: I1014 13:28:22.097125 4740 scope.go:117] "RemoveContainer" containerID="b61c1ab1ec698919e1b5cef271aec9037b0600ce60d4916637ddb3a39c701d95"
Oct 14 13:28:39.431573 master-1 kubenswrapper[4740]: I1014 13:28:39.431480 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/revision-pruner-10-master-1"]
Oct 14 13:28:39.432973 master-1 kubenswrapper[4740]: E1014 13:28:39.431820 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb24e814-5147-4bab-a2ac-0fa7b97b5ecf" containerName="installer"
Oct 14 13:28:39.432973 master-1 kubenswrapper[4740]: I1014 13:28:39.431842 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb24e814-5147-4bab-a2ac-0fa7b97b5ecf" containerName="installer"
Oct 14 13:28:39.432973 master-1 kubenswrapper[4740]: I1014 13:28:39.432075 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb24e814-5147-4bab-a2ac-0fa7b97b5ecf" containerName="installer"
Oct 14 13:28:39.432973 master-1 kubenswrapper[4740]: I1014 13:28:39.432821 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:39.436197 master-1 kubenswrapper[4740]: I1014 13:28:39.436111 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xbs2c"
Oct 14 13:28:39.455940 master-1 kubenswrapper[4740]: I1014 13:28:39.455861 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-1"]
Oct 14 13:28:39.574117 master-1 kubenswrapper[4740]: I1014 13:28:39.574033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf6b504-565c-4311-a44f-c7c9e6f03add-kube-api-access\") pod \"revision-pruner-10-master-1\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") " pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:39.574117 master-1 kubenswrapper[4740]: I1014 13:28:39.574087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b504-565c-4311-a44f-c7c9e6f03add-kubelet-dir\") pod \"revision-pruner-10-master-1\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") " pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:39.676132 master-1 kubenswrapper[4740]: I1014 13:28:39.676049 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf6b504-565c-4311-a44f-c7c9e6f03add-kube-api-access\") pod \"revision-pruner-10-master-1\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") " pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:39.676739 master-1 kubenswrapper[4740]: I1014 13:28:39.676150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b504-565c-4311-a44f-c7c9e6f03add-kubelet-dir\") pod \"revision-pruner-10-master-1\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") " pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:39.676739 master-1 kubenswrapper[4740]: I1014 13:28:39.676269 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b504-565c-4311-a44f-c7c9e6f03add-kubelet-dir\") pod \"revision-pruner-10-master-1\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") " pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:39.700090 master-1 kubenswrapper[4740]: I1014 13:28:39.699862 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf6b504-565c-4311-a44f-c7c9e6f03add-kube-api-access\") pod \"revision-pruner-10-master-1\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") " pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:39.758988 master-1 kubenswrapper[4740]: I1014 13:28:39.758881 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:40.255386 master-1 kubenswrapper[4740]: I1014 13:28:40.255208 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/revision-pruner-10-master-1"]
Oct 14 13:28:41.154197 master-1 kubenswrapper[4740]: I1014 13:28:41.154126 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"0cf6b504-565c-4311-a44f-c7c9e6f03add","Type":"ContainerStarted","Data":"f32a0dcc559c97d47a949a488865eb3c0d1430e4ba8ec9af431d3cdb7201b244"}
Oct 14 13:28:41.154197 master-1 kubenswrapper[4740]: I1014 13:28:41.154181 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"0cf6b504-565c-4311-a44f-c7c9e6f03add","Type":"ContainerStarted","Data":"d91d90274a4ac0147ce7ee1885db29401f1a302115fe24214b4e97f18fcc1739"}
Oct 14 13:28:41.176476 master-1 kubenswrapper[4740]: I1014 13:28:41.176387 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/revision-pruner-10-master-1" podStartSLOduration=2.176367418 podStartE2EDuration="2.176367418s" podCreationTimestamp="2025-10-14 13:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:28:41.172941506 +0000 UTC m=+1346.983230845" watchObservedRunningTime="2025-10-14 13:28:41.176367418 +0000 UTC m=+1346.986656747"
Oct 14 13:28:42.162610 master-1 kubenswrapper[4740]: I1014 13:28:42.162538 4740 generic.go:334] "Generic (PLEG): container finished" podID="0cf6b504-565c-4311-a44f-c7c9e6f03add" containerID="f32a0dcc559c97d47a949a488865eb3c0d1430e4ba8ec9af431d3cdb7201b244" exitCode=0
Oct 14 13:28:42.162610 master-1 kubenswrapper[4740]: I1014 13:28:42.162601 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"0cf6b504-565c-4311-a44f-c7c9e6f03add","Type":"ContainerDied","Data":"f32a0dcc559c97d47a949a488865eb3c0d1430e4ba8ec9af431d3cdb7201b244"}
Oct 14 13:28:43.594363 master-1 kubenswrapper[4740]: I1014 13:28:43.593432 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:43.631723 master-1 kubenswrapper[4740]: I1014 13:28:43.631667 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b504-565c-4311-a44f-c7c9e6f03add-kubelet-dir\") pod \"0cf6b504-565c-4311-a44f-c7c9e6f03add\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") "
Oct 14 13:28:43.631941 master-1 kubenswrapper[4740]: I1014 13:28:43.631780 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf6b504-565c-4311-a44f-c7c9e6f03add-kube-api-access\") pod \"0cf6b504-565c-4311-a44f-c7c9e6f03add\" (UID: \"0cf6b504-565c-4311-a44f-c7c9e6f03add\") "
Oct 14 13:28:43.631941 master-1 kubenswrapper[4740]: I1014 13:28:43.631781 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cf6b504-565c-4311-a44f-c7c9e6f03add-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0cf6b504-565c-4311-a44f-c7c9e6f03add" (UID: "0cf6b504-565c-4311-a44f-c7c9e6f03add"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:28:43.632067 master-1 kubenswrapper[4740]: I1014 13:28:43.631958 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b504-565c-4311-a44f-c7c9e6f03add-kubelet-dir\") on node \"master-1\" DevicePath \"\""
Oct 14 13:28:43.634646 master-1 kubenswrapper[4740]: I1014 13:28:43.634607 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf6b504-565c-4311-a44f-c7c9e6f03add-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0cf6b504-565c-4311-a44f-c7c9e6f03add" (UID: "0cf6b504-565c-4311-a44f-c7c9e6f03add"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:28:43.733329 master-1 kubenswrapper[4740]: I1014 13:28:43.733259 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cf6b504-565c-4311-a44f-c7c9e6f03add-kube-api-access\") on node \"master-1\" DevicePath \"\""
Oct 14 13:28:44.179548 master-1 kubenswrapper[4740]: I1014 13:28:44.179443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/revision-pruner-10-master-1" event={"ID":"0cf6b504-565c-4311-a44f-c7c9e6f03add","Type":"ContainerDied","Data":"d91d90274a4ac0147ce7ee1885db29401f1a302115fe24214b4e97f18fcc1739"}
Oct 14 13:28:44.179548 master-1 kubenswrapper[4740]: I1014 13:28:44.179492 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91d90274a4ac0147ce7ee1885db29401f1a302115fe24214b4e97f18fcc1739"
Oct 14 13:28:44.179922 master-1 kubenswrapper[4740]: I1014 13:28:44.179590 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/revision-pruner-10-master-1"
Oct 14 13:28:49.110960 master-1 kubenswrapper[4740]: I1014 13:28:49.110888 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-1"]
Oct 14 13:28:49.120578 master-1 kubenswrapper[4740]: I1014 13:28:49.120522 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-1-master-1"]
Oct 14 13:28:50.952686 master-1 kubenswrapper[4740]: I1014 13:28:50.952565 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b61b7a8e-e2be-4f11-a659-1919213dda51" path="/var/lib/kubelet/pods/b61b7a8e-e2be-4f11-a659-1919213dda51/volumes"
Oct 14 13:29:16.311951 master-1 kubenswrapper[4740]: I1014 13:29:16.311884 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h"]
Oct 14 13:29:16.312614 master-1 kubenswrapper[4740]: E1014 13:29:16.312135 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf6b504-565c-4311-a44f-c7c9e6f03add" containerName="pruner"
Oct 14 13:29:16.312614 master-1 kubenswrapper[4740]: I1014 13:29:16.312147 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf6b504-565c-4311-a44f-c7c9e6f03add" containerName="pruner"
Oct 14 13:29:16.312614 master-1 kubenswrapper[4740]: I1014 13:29:16.312264 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf6b504-565c-4311-a44f-c7c9e6f03add" containerName="pruner"
Oct 14 13:29:16.312776 master-1 kubenswrapper[4740]: I1014 13:29:16.312693 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h"
Oct 14 13:29:16.315267 master-1 kubenswrapper[4740]: I1014 13:29:16.315201 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Oct 14 13:29:16.338263 master-1 kubenswrapper[4740]: I1014 13:29:16.338168 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h"]
Oct 14 13:29:16.409033 master-1 kubenswrapper[4740]: I1014 13:29:16.395858 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e540d3a-2514-4929-b37e-7b0908d2977e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h\" (UID: \"8e540d3a-2514-4929-b37e-7b0908d2977e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h"
Oct 14 13:29:16.409033 master-1 kubenswrapper[4740]: I1014 13:29:16.395931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e540d3a-2514-4929-b37e-7b0908d2977e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h\" (UID: \"8e540d3a-2514-4929-b37e-7b0908d2977e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h"
Oct 14 13:29:16.498462 master-1 kubenswrapper[4740]: I1014 13:29:16.498387 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e540d3a-2514-4929-b37e-7b0908d2977e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h\" (UID: \"8e540d3a-2514-4929-b37e-7b0908d2977e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h"
Oct 14
13:29:16.498703 master-1 kubenswrapper[4740]: I1014 13:29:16.498466 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e540d3a-2514-4929-b37e-7b0908d2977e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h\" (UID: \"8e540d3a-2514-4929-b37e-7b0908d2977e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h" Oct 14 13:29:16.501348 master-1 kubenswrapper[4740]: I1014 13:29:16.501290 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e540d3a-2514-4929-b37e-7b0908d2977e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h\" (UID: \"8e540d3a-2514-4929-b37e-7b0908d2977e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h" Oct 14 13:29:16.501707 master-1 kubenswrapper[4740]: I1014 13:29:16.501671 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e540d3a-2514-4929-b37e-7b0908d2977e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h\" (UID: \"8e540d3a-2514-4929-b37e-7b0908d2977e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h" Oct 14 13:29:16.628740 master-1 kubenswrapper[4740]: I1014 13:29:16.628582 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h" Oct 14 13:29:17.077054 master-1 kubenswrapper[4740]: I1014 13:29:17.077008 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h"] Oct 14 13:29:17.078283 master-1 kubenswrapper[4740]: W1014 13:29:17.078244 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e540d3a_2514_4929_b37e_7b0908d2977e.slice/crio-fa333f9bca2dd80049663b5ae1198742a7bccb0e7e317c9a97464fb0cbeaefaa WatchSource:0}: Error finding container fa333f9bca2dd80049663b5ae1198742a7bccb0e7e317c9a97464fb0cbeaefaa: Status 404 returned error can't find the container with id fa333f9bca2dd80049663b5ae1198742a7bccb0e7e317c9a97464fb0cbeaefaa Oct 14 13:29:17.080578 master-1 kubenswrapper[4740]: I1014 13:29:17.080560 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:29:17.393458 master-1 kubenswrapper[4740]: I1014 13:29:17.393307 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h" event={"ID":"8e540d3a-2514-4929-b37e-7b0908d2977e","Type":"ContainerStarted","Data":"fa333f9bca2dd80049663b5ae1198742a7bccb0e7e317c9a97464fb0cbeaefaa"} Oct 14 13:29:22.161794 master-1 kubenswrapper[4740]: I1014 13:29:22.161741 4740 scope.go:117] "RemoveContainer" containerID="f1ea437af65c58aa9a7defa07101efbb33a229bc2ca4bbc295be92bcd032e893" Oct 14 13:29:22.443098 master-1 kubenswrapper[4740]: I1014 13:29:22.442983 4740 generic.go:334] "Generic (PLEG): container finished" podID="ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" containerID="bc71a15d544001ec0327f2a718240b52ed1d0e11b63a81eddc56b5f9b5a7dd37" exitCode=0 Oct 14 13:29:22.443098 master-1 kubenswrapper[4740]: I1014 13:29:22.443032 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="assisted-installer/assisted-installer-controller-mzrkb" event={"ID":"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67","Type":"ContainerDied","Data":"bc71a15d544001ec0327f2a718240b52ed1d0e11b63a81eddc56b5f9b5a7dd37"} Oct 14 13:29:25.602187 master-1 kubenswrapper[4740]: I1014 13:29:25.602140 4740 scope.go:117] "RemoveContainer" containerID="9f41636be726016072c28ea80b0c3486ab89141361a1377e8eeffd48959d0e15" Oct 14 13:29:25.660481 master-1 kubenswrapper[4740]: I1014 13:29:25.660435 4740 scope.go:117] "RemoveContainer" containerID="5ac1218809d0fc572cfec08d0c990ed62a777d84382fd79cdbb8e11b45766b3d" Oct 14 13:29:25.667290 master-1 kubenswrapper[4740]: I1014 13:29:25.667260 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:29:25.685072 master-1 kubenswrapper[4740]: I1014 13:29:25.680098 4740 scope.go:117] "RemoveContainer" containerID="bf6d32c0ab07062e4cf2faa0fb3f11b49404272e70cf25e281d742b6cc15fdbe" Oct 14 13:29:25.704567 master-1 kubenswrapper[4740]: I1014 13:29:25.704528 4740 scope.go:117] "RemoveContainer" containerID="a836f0f0d731ba4ebc1d5f5e51a85585abeecbda30cc3a088b3ec77311ff5bed" Oct 14 13:29:25.831688 master-1 kubenswrapper[4740]: I1014 13:29:25.831644 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-ca-bundle\") pod \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " Oct 14 13:29:25.831865 master-1 kubenswrapper[4740]: I1014 13:29:25.831778 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pg7b\" (UniqueName: \"kubernetes.io/projected/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-kube-api-access-6pg7b\") pod \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " Oct 14 13:29:25.831865 master-1 kubenswrapper[4740]: 
I1014 13:29:25.831806 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-var-run-resolv-conf\") pod \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " Oct 14 13:29:25.831962 master-1 kubenswrapper[4740]: I1014 13:29:25.831882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-resolv-conf\") pod \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\" (UID: \"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67\") " Oct 14 13:29:25.832195 master-1 kubenswrapper[4740]: I1014 13:29:25.832172 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" (UID: "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:29:25.832311 master-1 kubenswrapper[4740]: I1014 13:29:25.832257 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" (UID: "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:29:25.832386 master-1 kubenswrapper[4740]: I1014 13:29:25.832333 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" (UID: "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67"). 
InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:29:25.837160 master-1 kubenswrapper[4740]: I1014 13:29:25.837132 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-kube-api-access-6pg7b" (OuterVolumeSpecName: "kube-api-access-6pg7b") pod "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" (UID: "ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67"). InnerVolumeSpecName "kube-api-access-6pg7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:29:25.933479 master-1 kubenswrapper[4740]: I1014 13:29:25.933428 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pg7b\" (UniqueName: \"kubernetes.io/projected/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-kube-api-access-6pg7b\") on node \"master-1\" DevicePath \"\"" Oct 14 13:29:25.933479 master-1 kubenswrapper[4740]: I1014 13:29:25.933474 4740 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-var-run-resolv-conf\") on node \"master-1\" DevicePath \"\"" Oct 14 13:29:25.933479 master-1 kubenswrapper[4740]: I1014 13:29:25.933488 4740 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-resolv-conf\") on node \"master-1\" DevicePath \"\"" Oct 14 13:29:25.933737 master-1 kubenswrapper[4740]: I1014 13:29:25.933499 4740 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67-host-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:29:26.474634 master-1 kubenswrapper[4740]: I1014 13:29:26.474580 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h" 
event={"ID":"8e540d3a-2514-4929-b37e-7b0908d2977e","Type":"ContainerStarted","Data":"35871e4e8eec78fe69a2e70983e0a395e5e9e999d7c4241e14bc40114019a46c"} Oct 14 13:29:26.476732 master-1 kubenswrapper[4740]: I1014 13:29:26.476700 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-mzrkb" event={"ID":"ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67","Type":"ContainerDied","Data":"0d60fb7e8da5e1cc5fc41915af909947121dca8b6f9d069bebefd95845d95026"} Oct 14 13:29:26.476813 master-1 kubenswrapper[4740]: I1014 13:29:26.476733 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d60fb7e8da5e1cc5fc41915af909947121dca8b6f9d069bebefd95845d95026" Oct 14 13:29:26.476968 master-1 kubenswrapper[4740]: I1014 13:29:26.476939 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-mzrkb" Oct 14 13:29:26.505242 master-1 kubenswrapper[4740]: I1014 13:29:26.505142 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h" podStartSLOduration=1.9164747260000001 podStartE2EDuration="10.505121007s" podCreationTimestamp="2025-10-14 13:29:16 +0000 UTC" firstStartedPulling="2025-10-14 13:29:17.080494205 +0000 UTC m=+1382.890783534" lastFinishedPulling="2025-10-14 13:29:25.669140496 +0000 UTC m=+1391.479429815" observedRunningTime="2025-10-14 13:29:26.503185825 +0000 UTC m=+1392.313475154" watchObservedRunningTime="2025-10-14 13:29:26.505121007 +0000 UTC m=+1392.315410336" Oct 14 13:30:00.183903 master-1 kubenswrapper[4740]: I1014 13:30:00.183820 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff"] Oct 14 13:30:00.184636 master-1 kubenswrapper[4740]: E1014 13:30:00.184097 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" containerName="assisted-installer-controller" Oct 14 13:30:00.184636 master-1 kubenswrapper[4740]: I1014 13:30:00.184114 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" containerName="assisted-installer-controller" Oct 14 13:30:00.184636 master-1 kubenswrapper[4740]: I1014 13:30:00.184267 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67" containerName="assisted-installer-controller" Oct 14 13:30:00.184849 master-1 kubenswrapper[4740]: I1014 13:30:00.184814 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.187159 master-1 kubenswrapper[4740]: I1014 13:30:00.187065 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-t5gjh" Oct 14 13:30:00.189139 master-1 kubenswrapper[4740]: I1014 13:30:00.189112 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 13:30:00.200388 master-1 kubenswrapper[4740]: I1014 13:30:00.200322 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff"] Oct 14 13:30:00.346058 master-1 kubenswrapper[4740]: I1014 13:30:00.345923 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-secret-volume\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.346397 master-1 kubenswrapper[4740]: I1014 13:30:00.346258 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-config-volume\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.346480 master-1 kubenswrapper[4740]: I1014 13:30:00.346424 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rm2\" (UniqueName: \"kubernetes.io/projected/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-kube-api-access-f7rm2\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.447640 master-1 kubenswrapper[4740]: I1014 13:30:00.447470 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-config-volume\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.447640 master-1 kubenswrapper[4740]: I1014 13:30:00.447559 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rm2\" (UniqueName: \"kubernetes.io/projected/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-kube-api-access-f7rm2\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.447640 master-1 kubenswrapper[4740]: I1014 13:30:00.447590 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-secret-volume\") pod \"collect-profiles-29340810-2nzff\" 
(UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.449063 master-1 kubenswrapper[4740]: I1014 13:30:00.448979 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-config-volume\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.452645 master-1 kubenswrapper[4740]: I1014 13:30:00.452610 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-secret-volume\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.473381 master-1 kubenswrapper[4740]: I1014 13:30:00.473301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rm2\" (UniqueName: \"kubernetes.io/projected/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-kube-api-access-f7rm2\") pod \"collect-profiles-29340810-2nzff\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.516047 master-1 kubenswrapper[4740]: I1014 13:30:00.515962 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:00.981862 master-1 kubenswrapper[4740]: I1014 13:30:00.981822 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff"] Oct 14 13:30:01.720358 master-1 kubenswrapper[4740]: I1014 13:30:01.720213 4740 generic.go:334] "Generic (PLEG): container finished" podID="64f32949-0e51-4ae4-9b89-3aa2a8eb237d" containerID="c88d80aeab7f9cf3f0845d077d891baa7597e2c8b2a98891a55e382c939608d0" exitCode=0 Oct 14 13:30:01.720358 master-1 kubenswrapper[4740]: I1014 13:30:01.720304 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" event={"ID":"64f32949-0e51-4ae4-9b89-3aa2a8eb237d","Type":"ContainerDied","Data":"c88d80aeab7f9cf3f0845d077d891baa7597e2c8b2a98891a55e382c939608d0"} Oct 14 13:30:01.720918 master-1 kubenswrapper[4740]: I1014 13:30:01.720373 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" event={"ID":"64f32949-0e51-4ae4-9b89-3aa2a8eb237d","Type":"ContainerStarted","Data":"79500292e61c66db3d6eae2f4867a25ce4f3427eaa66b8298618a365b14f83c8"} Oct 14 13:30:03.125742 master-1 kubenswrapper[4740]: I1014 13:30:03.125657 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:03.295466 master-1 kubenswrapper[4740]: I1014 13:30:03.295367 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7rm2\" (UniqueName: \"kubernetes.io/projected/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-kube-api-access-f7rm2\") pod \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " Oct 14 13:30:03.295808 master-1 kubenswrapper[4740]: I1014 13:30:03.295484 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-config-volume\") pod \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " Oct 14 13:30:03.295808 master-1 kubenswrapper[4740]: I1014 13:30:03.295679 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-secret-volume\") pod \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\" (UID: \"64f32949-0e51-4ae4-9b89-3aa2a8eb237d\") " Oct 14 13:30:03.296268 master-1 kubenswrapper[4740]: I1014 13:30:03.296179 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-config-volume" (OuterVolumeSpecName: "config-volume") pod "64f32949-0e51-4ae4-9b89-3aa2a8eb237d" (UID: "64f32949-0e51-4ae4-9b89-3aa2a8eb237d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:30:03.300756 master-1 kubenswrapper[4740]: I1014 13:30:03.300161 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "64f32949-0e51-4ae4-9b89-3aa2a8eb237d" (UID: "64f32949-0e51-4ae4-9b89-3aa2a8eb237d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:30:03.300756 master-1 kubenswrapper[4740]: I1014 13:30:03.300190 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-kube-api-access-f7rm2" (OuterVolumeSpecName: "kube-api-access-f7rm2") pod "64f32949-0e51-4ae4-9b89-3aa2a8eb237d" (UID: "64f32949-0e51-4ae4-9b89-3aa2a8eb237d"). InnerVolumeSpecName "kube-api-access-f7rm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:30:03.398141 master-1 kubenswrapper[4740]: I1014 13:30:03.397945 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-secret-volume\") on node \"master-1\" DevicePath \"\"" Oct 14 13:30:03.398141 master-1 kubenswrapper[4740]: I1014 13:30:03.398014 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7rm2\" (UniqueName: \"kubernetes.io/projected/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-kube-api-access-f7rm2\") on node \"master-1\" DevicePath \"\"" Oct 14 13:30:03.398141 master-1 kubenswrapper[4740]: I1014 13:30:03.398044 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64f32949-0e51-4ae4-9b89-3aa2a8eb237d-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 14 13:30:03.637933 master-1 kubenswrapper[4740]: I1014 13:30:03.637853 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/frr-k8s-nnbg4"] Oct 14 13:30:03.638165 master-1 kubenswrapper[4740]: E1014 13:30:03.638144 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f32949-0e51-4ae4-9b89-3aa2a8eb237d" containerName="collect-profiles" Oct 14 13:30:03.638165 master-1 kubenswrapper[4740]: I1014 13:30:03.638159 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f32949-0e51-4ae4-9b89-3aa2a8eb237d" containerName="collect-profiles" Oct 14 13:30:03.638317 master-1 kubenswrapper[4740]: I1014 13:30:03.638297 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f32949-0e51-4ae4-9b89-3aa2a8eb237d" containerName="collect-profiles" Oct 14 13:30:03.641470 master-1 kubenswrapper[4740]: I1014 13:30:03.641435 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.645079 master-1 kubenswrapper[4740]: I1014 13:30:03.645038 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 14 13:30:03.645079 master-1 kubenswrapper[4740]: I1014 13:30:03.645079 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 14 13:30:03.645864 master-1 kubenswrapper[4740]: I1014 13:30:03.645285 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 14 13:30:03.648498 master-1 kubenswrapper[4740]: I1014 13:30:03.648198 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 14 13:30:03.735183 master-1 kubenswrapper[4740]: I1014 13:30:03.734652 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" event={"ID":"64f32949-0e51-4ae4-9b89-3aa2a8eb237d","Type":"ContainerDied","Data":"79500292e61c66db3d6eae2f4867a25ce4f3427eaa66b8298618a365b14f83c8"} Oct 14 13:30:03.735183 master-1 kubenswrapper[4740]: 
I1014 13:30:03.734695 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79500292e61c66db3d6eae2f4867a25ce4f3427eaa66b8298618a365b14f83c8" Oct 14 13:30:03.735183 master-1 kubenswrapper[4740]: I1014 13:30:03.734755 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff" Oct 14 13:30:03.754568 master-1 kubenswrapper[4740]: I1014 13:30:03.754512 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7mkjj"] Oct 14 13:30:03.756654 master-1 kubenswrapper[4740]: I1014 13:30:03.756595 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7mkjj" Oct 14 13:30:03.760686 master-1 kubenswrapper[4740]: I1014 13:30:03.760655 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 14 13:30:03.764281 master-1 kubenswrapper[4740]: I1014 13:30:03.764248 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 14 13:30:03.765132 master-1 kubenswrapper[4740]: I1014 13:30:03.765104 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 14 13:30:03.807040 master-1 kubenswrapper[4740]: I1014 13:30:03.806996 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-frr-sockets\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.807329 master-1 kubenswrapper[4740]: I1014 13:30:03.807315 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff61622-703c-47c7-a70a-a076562ca3a3-metrics-certs\") pod \"frr-k8s-nnbg4\" 
(UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.807443 master-1 kubenswrapper[4740]: I1014 13:30:03.807429 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qszpv\" (UniqueName: \"kubernetes.io/projected/eff61622-703c-47c7-a70a-a076562ca3a3-kube-api-access-qszpv\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.807522 master-1 kubenswrapper[4740]: I1014 13:30:03.807511 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-metrics\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.807612 master-1 kubenswrapper[4740]: I1014 13:30:03.807601 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eff61622-703c-47c7-a70a-a076562ca3a3-frr-startup\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.807693 master-1 kubenswrapper[4740]: I1014 13:30:03.807679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-frr-conf\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.807808 master-1 kubenswrapper[4740]: I1014 13:30:03.807794 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-reloader\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " 
pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.908950 master-1 kubenswrapper[4740]: I1014 13:30:03.908787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eff61622-703c-47c7-a70a-a076562ca3a3-frr-startup\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.908950 master-1 kubenswrapper[4740]: I1014 13:30:03.908836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-frr-conf\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.908950 master-1 kubenswrapper[4740]: I1014 13:30:03.908874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlw6\" (UniqueName: \"kubernetes.io/projected/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-kube-api-access-zxlw6\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:03.908950 master-1 kubenswrapper[4740]: I1014 13:30:03.908894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-reloader\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.908950 master-1 kubenswrapper[4740]: I1014 13:30:03.908920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-metrics-certs\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:03.908950 master-1 kubenswrapper[4740]: I1014 
13:30:03.908936 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-frr-sockets\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.908950 master-1 kubenswrapper[4740]: I1014 13:30:03.908958 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff61622-703c-47c7-a70a-a076562ca3a3-metrics-certs\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.909476 master-1 kubenswrapper[4740]: I1014 13:30:03.908987 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:03.909476 master-1 kubenswrapper[4740]: I1014 13:30:03.909013 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qszpv\" (UniqueName: \"kubernetes.io/projected/eff61622-703c-47c7-a70a-a076562ca3a3-kube-api-access-qszpv\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.909476 master-1 kubenswrapper[4740]: I1014 13:30:03.909028 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-metrics\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.909476 master-1 kubenswrapper[4740]: I1014 13:30:03.909055 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-metallb-excludel2\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:03.910038 master-1 kubenswrapper[4740]: I1014 13:30:03.909998 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eff61622-703c-47c7-a70a-a076562ca3a3-frr-startup\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.910385 master-1 kubenswrapper[4740]: I1014 13:30:03.910347 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-frr-conf\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.910723 master-1 kubenswrapper[4740]: I1014 13:30:03.910493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-frr-sockets\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.910796 master-1 kubenswrapper[4740]: I1014 13:30:03.910645 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-metrics\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.910895 master-1 kubenswrapper[4740]: I1014 13:30:03.910663 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eff61622-703c-47c7-a70a-a076562ca3a3-reloader\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " 
pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.914571 master-1 kubenswrapper[4740]: I1014 13:30:03.914525 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff61622-703c-47c7-a70a-a076562ca3a3-metrics-certs\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.932551 master-1 kubenswrapper[4740]: I1014 13:30:03.932466 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qszpv\" (UniqueName: \"kubernetes.io/projected/eff61622-703c-47c7-a70a-a076562ca3a3-kube-api-access-qszpv\") pod \"frr-k8s-nnbg4\" (UID: \"eff61622-703c-47c7-a70a-a076562ca3a3\") " pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:03.959681 master-1 kubenswrapper[4740]: I1014 13:30:03.959611 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nnbg4" Oct 14 13:30:04.010740 master-1 kubenswrapper[4740]: I1014 13:30:04.010676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-metallb-excludel2\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.010895 master-1 kubenswrapper[4740]: I1014 13:30:04.010768 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxlw6\" (UniqueName: \"kubernetes.io/projected/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-kube-api-access-zxlw6\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.010895 master-1 kubenswrapper[4740]: I1014 13:30:04.010815 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-metrics-certs\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.010895 master-1 kubenswrapper[4740]: I1014 13:30:04.010886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.011050 master-1 kubenswrapper[4740]: E1014 13:30:04.011018 4740 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 13:30:04.011110 master-1 kubenswrapper[4740]: E1014 13:30:04.011082 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist podName:59cd9872-e0ab-4acd-b8c8-1fa1fd61e318 nodeName:}" failed. No retries permitted until 2025-10-14 13:30:04.511063186 +0000 UTC m=+1430.321352525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist") pod "speaker-7mkjj" (UID: "59cd9872-e0ab-4acd-b8c8-1fa1fd61e318") : secret "metallb-memberlist" not found Oct 14 13:30:04.011500 master-1 kubenswrapper[4740]: I1014 13:30:04.011450 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-metallb-excludel2\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.013831 master-1 kubenswrapper[4740]: I1014 13:30:04.013791 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-metrics-certs\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.030252 master-1 kubenswrapper[4740]: I1014 13:30:04.030160 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxlw6\" (UniqueName: \"kubernetes.io/projected/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-kube-api-access-zxlw6\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.516803 master-1 kubenswrapper[4740]: I1014 13:30:04.516702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:04.517760 master-1 kubenswrapper[4740]: E1014 13:30:04.517066 4740 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 13:30:04.517760 master-1 kubenswrapper[4740]: E1014 13:30:04.517202 4740 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist podName:59cd9872-e0ab-4acd-b8c8-1fa1fd61e318 nodeName:}" failed. No retries permitted until 2025-10-14 13:30:05.517173925 +0000 UTC m=+1431.327463244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist") pod "speaker-7mkjj" (UID: "59cd9872-e0ab-4acd-b8c8-1fa1fd61e318") : secret "metallb-memberlist" not found Oct 14 13:30:04.742727 master-1 kubenswrapper[4740]: I1014 13:30:04.742634 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerStarted","Data":"4e3996525ece27c22f2330e3bbc8a8044a72700c25f67aeac47bdb6cee949dc9"} Oct 14 13:30:05.532120 master-1 kubenswrapper[4740]: I1014 13:30:05.532050 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:05.536619 master-1 kubenswrapper[4740]: I1014 13:30:05.536577 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/59cd9872-e0ab-4acd-b8c8-1fa1fd61e318-memberlist\") pod \"speaker-7mkjj\" (UID: \"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318\") " pod="metallb-system/speaker-7mkjj" Oct 14 13:30:05.581574 master-1 kubenswrapper[4740]: I1014 13:30:05.581533 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7mkjj" Oct 14 13:30:05.597564 master-1 kubenswrapper[4740]: W1014 13:30:05.597507 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59cd9872_e0ab_4acd_b8c8_1fa1fd61e318.slice/crio-cd48d70076e6ab57729c245cbfc9c1f25e527b0bb078be5be6ef97db864ce915 WatchSource:0}: Error finding container cd48d70076e6ab57729c245cbfc9c1f25e527b0bb078be5be6ef97db864ce915: Status 404 returned error can't find the container with id cd48d70076e6ab57729c245cbfc9c1f25e527b0bb078be5be6ef97db864ce915 Oct 14 13:30:05.753628 master-1 kubenswrapper[4740]: I1014 13:30:05.753528 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7mkjj" event={"ID":"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318","Type":"ContainerStarted","Data":"cd48d70076e6ab57729c245cbfc9c1f25e527b0bb078be5be6ef97db864ce915"} Oct 14 13:30:08.631263 master-1 kubenswrapper[4740]: I1014 13:30:08.631158 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lkd88"] Oct 14 13:30:08.632426 master-1 kubenswrapper[4740]: I1014 13:30:08.632388 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.635895 master-1 kubenswrapper[4740]: I1014 13:30:08.635609 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 14 13:30:08.635895 master-1 kubenswrapper[4740]: I1014 13:30:08.635817 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 14 13:30:08.777392 master-1 kubenswrapper[4740]: I1014 13:30:08.777318 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvhj\" (UniqueName: \"kubernetes.io/projected/c9295a10-bbff-4e50-ae75-2fef346b2e6e-kube-api-access-khvhj\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.777392 master-1 kubenswrapper[4740]: I1014 13:30:08.777385 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-dbus-socket\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.777392 master-1 kubenswrapper[4740]: I1014 13:30:08.777406 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-nmstate-lock\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.777783 master-1 kubenswrapper[4740]: I1014 13:30:08.777465 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-ovs-socket\") pod 
\"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.831766 master-1 kubenswrapper[4740]: I1014 13:30:08.829619 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p"] Oct 14 13:30:08.831766 master-1 kubenswrapper[4740]: I1014 13:30:08.831608 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:08.834020 master-1 kubenswrapper[4740]: I1014 13:30:08.833990 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 14 13:30:08.837821 master-1 kubenswrapper[4740]: I1014 13:30:08.837612 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 14 13:30:08.840335 master-1 kubenswrapper[4740]: I1014 13:30:08.840276 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p"] Oct 14 13:30:08.882701 master-1 kubenswrapper[4740]: I1014 13:30:08.879481 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-ovs-socket\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.882701 master-1 kubenswrapper[4740]: I1014 13:30:08.879714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khvhj\" (UniqueName: \"kubernetes.io/projected/c9295a10-bbff-4e50-ae75-2fef346b2e6e-kube-api-access-khvhj\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.882701 master-1 kubenswrapper[4740]: I1014 13:30:08.879783 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-dbus-socket\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.882701 master-1 kubenswrapper[4740]: I1014 13:30:08.879819 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-nmstate-lock\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.882701 master-1 kubenswrapper[4740]: I1014 13:30:08.879990 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-nmstate-lock\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.882701 master-1 kubenswrapper[4740]: I1014 13:30:08.880054 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-ovs-socket\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.882701 master-1 kubenswrapper[4740]: I1014 13:30:08.880672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c9295a10-bbff-4e50-ae75-2fef346b2e6e-dbus-socket\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.907457 master-1 kubenswrapper[4740]: I1014 13:30:08.907377 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khvhj\" 
(UniqueName: \"kubernetes.io/projected/c9295a10-bbff-4e50-ae75-2fef346b2e6e-kube-api-access-khvhj\") pod \"nmstate-handler-lkd88\" (UID: \"c9295a10-bbff-4e50-ae75-2fef346b2e6e\") " pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.955712 master-1 kubenswrapper[4740]: I1014 13:30:08.955618 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lkd88" Oct 14 13:30:08.981544 master-1 kubenswrapper[4740]: I1014 13:30:08.981476 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:08.981683 master-1 kubenswrapper[4740]: I1014 13:30:08.981556 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:08.981683 master-1 kubenswrapper[4740]: I1014 13:30:08.981597 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959q6\" (UniqueName: \"kubernetes.io/projected/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-kube-api-access-959q6\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.076538 master-1 kubenswrapper[4740]: I1014 13:30:09.076417 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5958979c8-p9l2s"] Oct 14 13:30:09.082535 master-1 
kubenswrapper[4740]: I1014 13:30:09.077893 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.082535 master-1 kubenswrapper[4740]: I1014 13:30:09.080702 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-r2r7j" Oct 14 13:30:09.082535 master-1 kubenswrapper[4740]: I1014 13:30:09.081556 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 14 13:30:09.082535 master-1 kubenswrapper[4740]: I1014 13:30:09.082438 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 14 13:30:09.082794 master-1 kubenswrapper[4740]: I1014 13:30:09.082616 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 14 13:30:09.082794 master-1 kubenswrapper[4740]: I1014 13:30:09.082785 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 14 13:30:09.086276 master-1 kubenswrapper[4740]: I1014 13:30:09.082910 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 14 13:30:09.086276 master-1 kubenswrapper[4740]: I1014 13:30:09.083653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.086276 master-1 kubenswrapper[4740]: I1014 13:30:09.083709 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-nginx-conf\") pod 
\"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.086276 master-1 kubenswrapper[4740]: I1014 13:30:09.083805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959q6\" (UniqueName: \"kubernetes.io/projected/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-kube-api-access-959q6\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.091092 master-1 kubenswrapper[4740]: I1014 13:30:09.086964 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.093491 master-1 kubenswrapper[4740]: I1014 13:30:09.093402 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.094282 master-1 kubenswrapper[4740]: I1014 13:30:09.094143 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 14 13:30:09.110170 master-1 kubenswrapper[4740]: I1014 13:30:09.109515 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5958979c8-p9l2s"] Oct 14 13:30:09.140017 master-1 kubenswrapper[4740]: I1014 13:30:09.139937 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959q6\" (UniqueName: 
\"kubernetes.io/projected/b24fdf4a-7fd9-4c72-a69a-4e49362f526d-kube-api-access-959q6\") pod \"nmstate-console-plugin-6b874cbd85-h8v5p\" (UID: \"b24fdf4a-7fd9-4c72-a69a-4e49362f526d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.154909 master-1 kubenswrapper[4740]: I1014 13:30:09.154832 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" Oct 14 13:30:09.184982 master-1 kubenswrapper[4740]: I1014 13:30:09.184715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-oauth-serving-cert\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.184982 master-1 kubenswrapper[4740]: I1014 13:30:09.184804 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-serving-cert\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.184982 master-1 kubenswrapper[4740]: I1014 13:30:09.184825 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-service-ca\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.184982 master-1 kubenswrapper[4740]: I1014 13:30:09.184846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-trusted-ca-bundle\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.184982 master-1 kubenswrapper[4740]: I1014 13:30:09.184872 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-config\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.184982 master-1 kubenswrapper[4740]: I1014 13:30:09.184904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-oauth-config\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.184982 master-1 kubenswrapper[4740]: I1014 13:30:09.184931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbsn\" (UniqueName: \"kubernetes.io/projected/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-kube-api-access-hpbsn\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.285928 master-1 kubenswrapper[4740]: I1014 13:30:09.285721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-serving-cert\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.285928 master-1 kubenswrapper[4740]: I1014 13:30:09.285780 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-service-ca\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.285928 master-1 kubenswrapper[4740]: I1014 13:30:09.285803 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-trusted-ca-bundle\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.285928 master-1 kubenswrapper[4740]: I1014 13:30:09.285840 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-config\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.285928 master-1 kubenswrapper[4740]: I1014 13:30:09.285888 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-oauth-config\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.285928 master-1 kubenswrapper[4740]: I1014 13:30:09.285926 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbsn\" (UniqueName: \"kubernetes.io/projected/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-kube-api-access-hpbsn\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s" Oct 14 13:30:09.286286 master-1 kubenswrapper[4740]: I1014 
13:30:09.285965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-oauth-serving-cert\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.288436 master-1 kubenswrapper[4740]: I1014 13:30:09.287699 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-oauth-serving-cert\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.288436 master-1 kubenswrapper[4740]: I1014 13:30:09.287855 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-trusted-ca-bundle\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.288882 master-1 kubenswrapper[4740]: I1014 13:30:09.288468 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-service-ca\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.290759 master-1 kubenswrapper[4740]: I1014 13:30:09.289860 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-config\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.291023 master-1 kubenswrapper[4740]: I1014 13:30:09.290835 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-serving-cert\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.291023 master-1 kubenswrapper[4740]: I1014 13:30:09.290994 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-console-oauth-config\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.313302 master-1 kubenswrapper[4740]: I1014 13:30:09.312930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbsn\" (UniqueName: \"kubernetes.io/projected/5fd95fb9-90cf-410f-9984-a31bfe8a5f76-kube-api-access-hpbsn\") pod \"console-5958979c8-p9l2s\" (UID: \"5fd95fb9-90cf-410f-9984-a31bfe8a5f76\") " pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.437184 master-1 kubenswrapper[4740]: I1014 13:30:09.437074 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:09.786301 master-1 kubenswrapper[4740]: I1014 13:30:09.786248 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lkd88" event={"ID":"c9295a10-bbff-4e50-ae75-2fef346b2e6e","Type":"ContainerStarted","Data":"90e912f6fab55a874805ed93635ef4d6d09faa9db3cdd1db5806faf12dc01610"}
Oct 14 13:30:12.807947 master-1 kubenswrapper[4740]: I1014 13:30:12.807885 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7mkjj" event={"ID":"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318","Type":"ContainerStarted","Data":"cc3f8634e66abd386550393c1335c0003c0c14f5fef847044d8b8ba91f2521d1"}
Oct 14 13:30:12.810482 master-1 kubenswrapper[4740]: I1014 13:30:12.809902 4740 generic.go:334] "Generic (PLEG): container finished" podID="eff61622-703c-47c7-a70a-a076562ca3a3" containerID="c556587b890bba9431173c7ea9a7ad51477fd47b0bddb199c07943d90d74f61d" exitCode=0
Oct 14 13:30:12.810482 master-1 kubenswrapper[4740]: I1014 13:30:12.809953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerDied","Data":"c556587b890bba9431173c7ea9a7ad51477fd47b0bddb199c07943d90d74f61d"}
Oct 14 13:30:12.840866 master-1 kubenswrapper[4740]: I1014 13:30:12.840807 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5958979c8-p9l2s"]
Oct 14 13:30:12.881668 master-1 kubenswrapper[4740]: I1014 13:30:12.881614 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p"]
Oct 14 13:30:12.891538 master-1 kubenswrapper[4740]: W1014 13:30:12.891482 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb24fdf4a_7fd9_4c72_a69a_4e49362f526d.slice/crio-16357fcbdcd2b857ece4580cb852a4514c14767664c51d92dd67f987cc5791a7 WatchSource:0}: Error finding container 16357fcbdcd2b857ece4580cb852a4514c14767664c51d92dd67f987cc5791a7: Status 404 returned error can't find the container with id 16357fcbdcd2b857ece4580cb852a4514c14767664c51d92dd67f987cc5791a7
Oct 14 13:30:13.818054 master-1 kubenswrapper[4740]: I1014 13:30:13.818007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5958979c8-p9l2s" event={"ID":"5fd95fb9-90cf-410f-9984-a31bfe8a5f76","Type":"ContainerStarted","Data":"25ac9718a6268c3be46744435f371d973ac32c3ea86c794852f12eb98d4c9779"}
Oct 14 13:30:13.818054 master-1 kubenswrapper[4740]: I1014 13:30:13.818051 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5958979c8-p9l2s" event={"ID":"5fd95fb9-90cf-410f-9984-a31bfe8a5f76","Type":"ContainerStarted","Data":"2e5b54f2d1a03eba007848c8ea76a8792da976cad79fd86376ba7f30ce98c614"}
Oct 14 13:30:13.820701 master-1 kubenswrapper[4740]: I1014 13:30:13.820656 4740 generic.go:334] "Generic (PLEG): container finished" podID="eff61622-703c-47c7-a70a-a076562ca3a3" containerID="ac792c6f321a2a4760bb3358ee580e70fd3f8c8a1d2544a450cc51e6561b42fb" exitCode=0
Oct 14 13:30:13.820701 master-1 kubenswrapper[4740]: I1014 13:30:13.820679 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerDied","Data":"ac792c6f321a2a4760bb3358ee580e70fd3f8c8a1d2544a450cc51e6561b42fb"}
Oct 14 13:30:13.823564 master-1 kubenswrapper[4740]: I1014 13:30:13.823537 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" event={"ID":"b24fdf4a-7fd9-4c72-a69a-4e49362f526d","Type":"ContainerStarted","Data":"16357fcbdcd2b857ece4580cb852a4514c14767664c51d92dd67f987cc5791a7"}
Oct 14 13:30:13.850388 master-1 kubenswrapper[4740]: I1014 13:30:13.850311 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5958979c8-p9l2s" podStartSLOduration=4.850294502 podStartE2EDuration="4.850294502s" podCreationTimestamp="2025-10-14 13:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:30:13.847546089 +0000 UTC m=+1439.657835418" watchObservedRunningTime="2025-10-14 13:30:13.850294502 +0000 UTC m=+1439.660583831"
Oct 14 13:30:14.833433 master-1 kubenswrapper[4740]: I1014 13:30:14.833357 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7mkjj" event={"ID":"59cd9872-e0ab-4acd-b8c8-1fa1fd61e318","Type":"ContainerStarted","Data":"33a32be3e806857580125cd5f49a64d298971f031183c923086d710c31fb9498"}
Oct 14 13:30:14.833923 master-1 kubenswrapper[4740]: I1014 13:30:14.833506 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7mkjj"
Oct 14 13:30:14.838799 master-1 kubenswrapper[4740]: I1014 13:30:14.838746 4740 generic.go:334] "Generic (PLEG): container finished" podID="eff61622-703c-47c7-a70a-a076562ca3a3" containerID="e66fe8bc097b3f5ca878a4b47322e024a990e61fc6a8677e1c9c0312efb8cc88" exitCode=0
Oct 14 13:30:14.838857 master-1 kubenswrapper[4740]: I1014 13:30:14.838835 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerDied","Data":"e66fe8bc097b3f5ca878a4b47322e024a990e61fc6a8677e1c9c0312efb8cc88"}
Oct 14 13:30:14.840218 master-1 kubenswrapper[4740]: I1014 13:30:14.840174 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lkd88" event={"ID":"c9295a10-bbff-4e50-ae75-2fef346b2e6e","Type":"ContainerStarted","Data":"6301c8098cfac5cfe8d8b1c4c2cfa6a4707e105760c0955278c4f3b732f8efe8"}
Oct 14 13:30:14.921128 master-1 kubenswrapper[4740]: I1014 13:30:14.921046 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7mkjj" podStartSLOduration=3.377851509 podStartE2EDuration="11.921024536s" podCreationTimestamp="2025-10-14 13:30:03 +0000 UTC" firstStartedPulling="2025-10-14 13:30:05.599925678 +0000 UTC m=+1431.410215007" lastFinishedPulling="2025-10-14 13:30:14.143098705 +0000 UTC m=+1439.953388034" observedRunningTime="2025-10-14 13:30:14.916739032 +0000 UTC m=+1440.727028381" watchObservedRunningTime="2025-10-14 13:30:14.921024536 +0000 UTC m=+1440.731313865"
Oct 14 13:30:14.980470 master-1 kubenswrapper[4740]: I1014 13:30:14.980343 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lkd88" podStartSLOduration=1.83953905 podStartE2EDuration="6.980318844s" podCreationTimestamp="2025-10-14 13:30:08 +0000 UTC" firstStartedPulling="2025-10-14 13:30:08.997899723 +0000 UTC m=+1434.808189062" lastFinishedPulling="2025-10-14 13:30:14.138679527 +0000 UTC m=+1439.948968856" observedRunningTime="2025-10-14 13:30:14.975691871 +0000 UTC m=+1440.785981200" watchObservedRunningTime="2025-10-14 13:30:14.980318844 +0000 UTC m=+1440.790608213"
Oct 14 13:30:15.847191 master-1 kubenswrapper[4740]: I1014 13:30:15.847143 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" event={"ID":"b24fdf4a-7fd9-4c72-a69a-4e49362f526d","Type":"ContainerStarted","Data":"9905dac8c190755624e49d2b63b2779a030e60b2bff1ca897c799da3e6b71cc1"}
Oct 14 13:30:15.854595 master-1 kubenswrapper[4740]: I1014 13:30:15.852143 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerStarted","Data":"297bcb9089fce368234d1415da82e73183023891f372dcb66e409f63eee510c4"}
Oct 14 13:30:15.854595 master-1 kubenswrapper[4740]: I1014 13:30:15.852202 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerStarted","Data":"217fcc621de005793a8efe25a2010e9818dbc77fdc3f10cce583096f3f6ff9d5"}
Oct 14 13:30:15.854595 master-1 kubenswrapper[4740]: I1014 13:30:15.852220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerStarted","Data":"ef69601abddfaa273c7cc645d80d2426afa5f9fe14d1c118e1a939f12aa0ecd5"}
Oct 14 13:30:15.854595 master-1 kubenswrapper[4740]: I1014 13:30:15.852557 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lkd88"
Oct 14 13:30:15.871159 master-1 kubenswrapper[4740]: I1014 13:30:15.871076 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p" podStartSLOduration=5.396528548 podStartE2EDuration="7.871059128s" podCreationTimestamp="2025-10-14 13:30:08 +0000 UTC" firstStartedPulling="2025-10-14 13:30:12.89456962 +0000 UTC m=+1438.704858949" lastFinishedPulling="2025-10-14 13:30:15.3691002 +0000 UTC m=+1441.179389529" observedRunningTime="2025-10-14 13:30:15.867685008 +0000 UTC m=+1441.677974337" watchObservedRunningTime="2025-10-14 13:30:15.871059128 +0000 UTC m=+1441.681348457"
Oct 14 13:30:16.869017 master-1 kubenswrapper[4740]: I1014 13:30:16.868926 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerStarted","Data":"040f6bb0a99b1b73d5d68ecddd426fbddbe6308d9d3657750eb8fe8f395b7eea"}
Oct 14 13:30:16.869017 master-1 kubenswrapper[4740]: I1014 13:30:16.869001 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerStarted","Data":"313dd8018a0bfa3b0381c00c90ccc834fa376228bd35690c4bb36ff3a8a81f67"}
Oct 14 13:30:16.869017 master-1 kubenswrapper[4740]: I1014 13:30:16.869027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nnbg4" event={"ID":"eff61622-703c-47c7-a70a-a076562ca3a3","Type":"ContainerStarted","Data":"2a2798a16a60726828c633145a4613f5618041734270c5df04538920477a775f"}
Oct 14 13:30:16.920876 master-1 kubenswrapper[4740]: I1014 13:30:16.920740 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nnbg4" podStartSLOduration=5.550884856 podStartE2EDuration="13.92071249s" podCreationTimestamp="2025-10-14 13:30:03 +0000 UTC" firstStartedPulling="2025-10-14 13:30:04.047746352 +0000 UTC m=+1429.858035681" lastFinishedPulling="2025-10-14 13:30:12.417573986 +0000 UTC m=+1438.227863315" observedRunningTime="2025-10-14 13:30:16.91128768 +0000 UTC m=+1442.721577069" watchObservedRunningTime="2025-10-14 13:30:16.92071249 +0000 UTC m=+1442.731001849"
Oct 14 13:30:17.877892 master-1 kubenswrapper[4740]: I1014 13:30:17.877802 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nnbg4"
Oct 14 13:30:18.961647 master-1 kubenswrapper[4740]: I1014 13:30:18.961571 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nnbg4"
Oct 14 13:30:19.031007 master-1 kubenswrapper[4740]: I1014 13:30:19.030951 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nnbg4"
Oct 14 13:30:19.438267 master-1 kubenswrapper[4740]: I1014 13:30:19.438167 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:19.438267 master-1 kubenswrapper[4740]: I1014 13:30:19.438263 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:19.443418 master-1 kubenswrapper[4740]: I1014 13:30:19.443382 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:19.895600 master-1 kubenswrapper[4740]: I1014 13:30:19.895516 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5958979c8-p9l2s"
Oct 14 13:30:23.984985 master-1 kubenswrapper[4740]: I1014 13:30:23.984885 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lkd88"
Oct 14 13:30:25.587775 master-1 kubenswrapper[4740]: I1014 13:30:25.587733 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7mkjj"
Oct 14 13:30:33.972510 master-1 kubenswrapper[4740]: I1014 13:30:33.972421 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nnbg4"
Oct 14 13:30:36.577300 master-1 kubenswrapper[4740]: I1014 13:30:36.577209 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-jdht5"]
Oct 14 13:30:36.578143 master-1 kubenswrapper[4740]: I1014 13:30:36.578116 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.581263 master-1 kubenswrapper[4740]: I1014 13:30:36.581207 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Oct 14 13:30:36.581369 master-1 kubenswrapper[4740]: I1014 13:30:36.581262 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Oct 14 13:30:36.581525 master-1 kubenswrapper[4740]: I1014 13:30:36.581459 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Oct 14 13:30:36.644997 master-1 kubenswrapper[4740]: I1014 13:30:36.644910 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-jdht5"]
Oct 14 13:30:36.715095 master-1 kubenswrapper[4740]: I1014 13:30:36.715027 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-file-lock-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-sys\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715163 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-run-udev\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-registration-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715221 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-device-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715297 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-node-plugin-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-lvmd-config\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715362 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-csi-plugin-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715388 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-pod-volumes-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715430 master-1 kubenswrapper[4740]: I1014 13:30:36.715430 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7nrl\" (UniqueName: \"kubernetes.io/projected/9ed8f94b-69a3-411a-a4ef-a362b092dac5-kube-api-access-x7nrl\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.715801 master-1 kubenswrapper[4740]: I1014 13:30:36.715470 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed8f94b-69a3-411a-a4ef-a362b092dac5-metrics-cert\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816260 master-1 kubenswrapper[4740]: I1014 13:30:36.816174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-file-lock-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816260 master-1 kubenswrapper[4740]: I1014 13:30:36.816220 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-sys\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816260 master-1 kubenswrapper[4740]: I1014 13:30:36.816266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-run-udev\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816287 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-registration-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816305 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-device-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816324 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-node-plugin-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816349 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-lvmd-config\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-csi-plugin-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816410 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-pod-volumes-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816416 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-sys\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-run-udev\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7nrl\" (UniqueName: \"kubernetes.io/projected/9ed8f94b-69a3-411a-a4ef-a362b092dac5-kube-api-access-x7nrl\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-device-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816631 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed8f94b-69a3-411a-a4ef-a362b092dac5-metrics-cert\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.816704 master-1 kubenswrapper[4740]: I1014 13:30:36.816644 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-lvmd-config\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.817644 master-1 kubenswrapper[4740]: I1014 13:30:36.816950 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-pod-volumes-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.817644 master-1 kubenswrapper[4740]: I1014 13:30:36.816982 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-registration-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.817644 master-1 kubenswrapper[4740]: I1014 13:30:36.817002 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-csi-plugin-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.817644 master-1 kubenswrapper[4740]: I1014 13:30:36.817046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-node-plugin-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.817644 master-1 kubenswrapper[4740]: I1014 13:30:36.817440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9ed8f94b-69a3-411a-a4ef-a362b092dac5-file-lock-dir\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.822712 master-1 kubenswrapper[4740]: I1014 13:30:36.822633 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9ed8f94b-69a3-411a-a4ef-a362b092dac5-metrics-cert\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.837971 master-1 kubenswrapper[4740]: I1014 13:30:36.837867 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7nrl\" (UniqueName: \"kubernetes.io/projected/9ed8f94b-69a3-411a-a4ef-a362b092dac5-kube-api-access-x7nrl\") pod \"vg-manager-jdht5\" (UID: \"9ed8f94b-69a3-411a-a4ef-a362b092dac5\") " pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:36.937511 master-1 kubenswrapper[4740]: I1014 13:30:36.937422 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:37.435265 master-1 kubenswrapper[4740]: W1014 13:30:37.435181 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed8f94b_69a3_411a_a4ef_a362b092dac5.slice/crio-0d52e8832197ede6481c2b189a773fff32d898ddd8061efe4b20982938620e9a WatchSource:0}: Error finding container 0d52e8832197ede6481c2b189a773fff32d898ddd8061efe4b20982938620e9a: Status 404 returned error can't find the container with id 0d52e8832197ede6481c2b189a773fff32d898ddd8061efe4b20982938620e9a
Oct 14 13:30:37.435265 master-1 kubenswrapper[4740]: I1014 13:30:37.435212 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-jdht5"]
Oct 14 13:30:38.038303 master-1 kubenswrapper[4740]: I1014 13:30:38.038076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-jdht5" event={"ID":"9ed8f94b-69a3-411a-a4ef-a362b092dac5","Type":"ContainerStarted","Data":"0d52e8832197ede6481c2b189a773fff32d898ddd8061efe4b20982938620e9a"}
Oct 14 13:30:43.077495 master-1 kubenswrapper[4740]: I1014 13:30:43.077401 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-jdht5" event={"ID":"9ed8f94b-69a3-411a-a4ef-a362b092dac5","Type":"ContainerStarted","Data":"c77ba4aed48a451b6cfce3fc0131620b688bf6e0e48731ced45070a65ba4d40f"}
Oct 14 13:30:43.113586 master-1 kubenswrapper[4740]: I1014 13:30:43.113183 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-jdht5" podStartSLOduration=2.071739652 podStartE2EDuration="7.113143402s" podCreationTimestamp="2025-10-14 13:30:36 +0000 UTC" firstStartedPulling="2025-10-14 13:30:37.444274495 +0000 UTC m=+1463.254563824" lastFinishedPulling="2025-10-14 13:30:42.485678245 +0000 UTC m=+1468.295967574" observedRunningTime="2025-10-14 13:30:43.104208145 +0000 UTC m=+1468.914497474" watchObservedRunningTime="2025-10-14 13:30:43.113143402 +0000 UTC m=+1468.923432731"
Oct 14 13:30:45.093491 master-1 kubenswrapper[4740]: I1014 13:30:45.093431 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-jdht5_9ed8f94b-69a3-411a-a4ef-a362b092dac5/vg-manager/0.log"
Oct 14 13:30:45.094075 master-1 kubenswrapper[4740]: I1014 13:30:45.093502 4740 generic.go:334] "Generic (PLEG): container finished" podID="9ed8f94b-69a3-411a-a4ef-a362b092dac5" containerID="c77ba4aed48a451b6cfce3fc0131620b688bf6e0e48731ced45070a65ba4d40f" exitCode=1
Oct 14 13:30:45.094075 master-1 kubenswrapper[4740]: I1014 13:30:45.093544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-jdht5" event={"ID":"9ed8f94b-69a3-411a-a4ef-a362b092dac5","Type":"ContainerDied","Data":"c77ba4aed48a451b6cfce3fc0131620b688bf6e0e48731ced45070a65ba4d40f"}
Oct 14 13:30:45.094879 master-1 kubenswrapper[4740]: I1014 13:30:45.094632 4740 scope.go:117] "RemoveContainer" containerID="c77ba4aed48a451b6cfce3fc0131620b688bf6e0e48731ced45070a65ba4d40f"
Oct 14 13:30:45.387523 master-1 kubenswrapper[4740]: I1014 13:30:45.387347 4740 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Oct 14 13:30:45.898674 master-1 kubenswrapper[4740]: I1014 13:30:45.898323 4740 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2025-10-14T13:30:45.387380764Z","Handler":null,"Name":""}
Oct 14 13:30:45.903029 master-1 kubenswrapper[4740]: I1014 13:30:45.902986 4740 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Oct 14 13:30:45.903126 master-1 kubenswrapper[4740]: I1014 13:30:45.903067 4740 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Oct 14 13:30:46.101113 master-1 kubenswrapper[4740]: I1014 13:30:46.101075 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-jdht5_9ed8f94b-69a3-411a-a4ef-a362b092dac5/vg-manager/0.log"
Oct 14 13:30:46.101691 master-1 kubenswrapper[4740]: I1014 13:30:46.101131 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-jdht5" event={"ID":"9ed8f94b-69a3-411a-a4ef-a362b092dac5","Type":"ContainerStarted","Data":"4677474709fbc16e0132d92166ea7f659157fc786c5b605ce8336d5bd2a35b77"}
Oct 14 13:30:46.938686 master-1 kubenswrapper[4740]: I1014 13:30:46.938586 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:56.941971 master-1 kubenswrapper[4740]: I1014 13:30:56.941883 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:56.942987 master-1 kubenswrapper[4740]: I1014 13:30:56.942371 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:56.959624 master-1 kubenswrapper[4740]: I1014 13:30:56.959570 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-jdht5"
Oct 14 13:30:59.829790 master-1 kubenswrapper[4740]: I1014 13:30:59.829643 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-1"]
Oct 14 13:30:59.831674 master-1 kubenswrapper[4740]: I1014 13:30:59.831627 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1"
Oct 14 13:30:59.836390 master-1 kubenswrapper[4740]: I1014 13:30:59.835063 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-p7d8w"
Oct 14 13:30:59.868869 master-1 kubenswrapper[4740]: I1014 13:30:59.865702 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-1"]
Oct 14 13:30:59.890067 master-1 kubenswrapper[4740]: I1014 13:30:59.890027 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1"
Oct 14 13:30:59.890259 master-1 kubenswrapper[4740]: I1014 13:30:59.890245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-var-lock\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1"
Oct 14 13:30:59.890379 master-1 kubenswrapper[4740]: I1014 13:30:59.890366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kube-api-access\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1"
Oct 14 13:30:59.991539 master-1 kubenswrapper[4740]: I1014 13:30:59.991467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kubelet-dir\") pod \"installer-6-master-1\" (UID:
\"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:30:59.991794 master-1 kubenswrapper[4740]: I1014 13:30:59.991556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-var-lock\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:30:59.991794 master-1 kubenswrapper[4740]: I1014 13:30:59.991588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kubelet-dir\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:30:59.991794 master-1 kubenswrapper[4740]: I1014 13:30:59.991620 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kube-api-access\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:30:59.991933 master-1 kubenswrapper[4740]: I1014 13:30:59.991675 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-var-lock\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:31:00.016165 master-1 kubenswrapper[4740]: I1014 13:31:00.016124 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kube-api-access\") pod \"installer-6-master-1\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " 
pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:31:00.159573 master-1 kubenswrapper[4740]: I1014 13:31:00.159431 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:31:00.621117 master-1 kubenswrapper[4740]: I1014 13:31:00.621029 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-1"] Oct 14 13:31:01.233704 master-1 kubenswrapper[4740]: I1014 13:31:01.233535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" event={"ID":"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33","Type":"ContainerStarted","Data":"401b4fec8eae3dc52ecd1577386c02b6cc54fc45c82cb6c37b5c8be129623672"} Oct 14 13:31:01.234510 master-1 kubenswrapper[4740]: I1014 13:31:01.234441 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" event={"ID":"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33","Type":"ContainerStarted","Data":"b261f4ab56481d56cd699caa8f0e1f2d340f2236dd4bc5c998f27824ca048ad9"} Oct 14 13:31:01.265511 master-1 kubenswrapper[4740]: I1014 13:31:01.265371 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-1" podStartSLOduration=2.265337521 podStartE2EDuration="2.265337521s" podCreationTimestamp="2025-10-14 13:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:31:01.263342467 +0000 UTC m=+1487.073631836" watchObservedRunningTime="2025-10-14 13:31:01.265337521 +0000 UTC m=+1487.075626880" Oct 14 13:31:03.707475 master-1 kubenswrapper[4740]: I1014 13:31:03.707400 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-5-master-1"] Oct 14 13:31:03.851572 master-1 kubenswrapper[4740]: I1014 13:31:03.851316 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-etcd/installer-5-master-1"] Oct 14 13:31:04.959929 master-1 kubenswrapper[4740]: I1014 13:31:04.959865 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6927f794-9b47-4a35-b412-78b7d24f7622" path="/var/lib/kubelet/pods/6927f794-9b47-4a35-b412-78b7d24f7622/volumes" Oct 14 13:31:25.774870 master-1 kubenswrapper[4740]: I1014 13:31:25.774799 4740 scope.go:117] "RemoveContainer" containerID="2b8339850f796f4cefb3b4fee56f3c30a156abd91eaf2c144f467486b31d4bff" Oct 14 13:31:39.226918 master-1 kubenswrapper[4740]: I1014 13:31:39.226836 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:31:39.227888 master-1 kubenswrapper[4740]: I1014 13:31:39.227492 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" containerID="cri-o://ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346" gracePeriod=135 Oct 14 13:31:39.227888 master-1 kubenswrapper[4740]: I1014 13:31:39.227553 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179" gracePeriod=135 Oct 14 13:31:39.227888 master-1 kubenswrapper[4740]: I1014 13:31:39.227685 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d" gracePeriod=135 Oct 14 13:31:39.227888 master-1 kubenswrapper[4740]: I1014 13:31:39.227684 4740 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958" gracePeriod=135 Oct 14 13:31:39.227888 master-1 kubenswrapper[4740]: I1014 13:31:39.227669 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570" gracePeriod=135 Oct 14 13:31:39.229864 master-1 kubenswrapper[4740]: I1014 13:31:39.229797 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:31:39.230341 master-1 kubenswrapper[4740]: E1014 13:31:39.230290 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" Oct 14 13:31:39.230341 master-1 kubenswrapper[4740]: I1014 13:31:39.230333 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: E1014 13:31:39.230369 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: I1014 13:31:39.230388 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: E1014 13:31:39.230423 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: I1014 
13:31:39.230440 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: E1014 13:31:39.230459 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-insecure-readyz" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: I1014 13:31:39.230474 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-insecure-readyz" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: E1014 13:31:39.230496 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="setup" Oct 14 13:31:39.230515 master-1 kubenswrapper[4740]: I1014 13:31:39.230515 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="setup" Oct 14 13:31:39.231084 master-1 kubenswrapper[4740]: E1014 13:31:39.230553 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:31:39.231084 master-1 kubenswrapper[4740]: I1014 13:31:39.230571 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:31:39.231084 master-1 kubenswrapper[4740]: I1014 13:31:39.230880 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-check-endpoints" Oct 14 13:31:39.231084 master-1 kubenswrapper[4740]: I1014 13:31:39.230922 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-regeneration-controller" Oct 14 13:31:39.231084 master-1 kubenswrapper[4740]: I1014 13:31:39.230962 
4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-cert-syncer" Oct 14 13:31:39.231084 master-1 kubenswrapper[4740]: I1014 13:31:39.230987 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver-insecure-readyz" Oct 14 13:31:39.231084 master-1 kubenswrapper[4740]: I1014 13:31:39.231005 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d61efaa0f96869cf2939026aad6022" containerName="kube-apiserver" Oct 14 13:31:39.335759 master-1 kubenswrapper[4740]: I1014 13:31:39.335685 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.335759 master-1 kubenswrapper[4740]: I1014 13:31:39.335757 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.336111 master-1 kubenswrapper[4740]: I1014 13:31:39.335882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.438895 master-1 kubenswrapper[4740]: I1014 13:31:39.438763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.439092 master-1 kubenswrapper[4740]: I1014 13:31:39.438892 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-audit-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.441854 master-1 kubenswrapper[4740]: I1014 13:31:39.441800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.442040 master-1 kubenswrapper[4740]: I1014 13:31:39.442008 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-resource-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.442114 master-1 kubenswrapper[4740]: I1014 13:31:39.442070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-cert-dir\") pod \"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.442194 master-1 kubenswrapper[4740]: I1014 13:31:39.442165 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/23141951a25391899fad7b9f2d5b6739-cert-dir\") pod 
\"kube-apiserver-master-1\" (UID: \"23141951a25391899fad7b9f2d5b6739\") " pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:31:39.589906 master-1 kubenswrapper[4740]: I1014 13:31:39.589796 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_42d61efaa0f96869cf2939026aad6022/kube-apiserver-cert-syncer/0.log" Oct 14 13:31:39.591071 master-1 kubenswrapper[4740]: I1014 13:31:39.590740 4740 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179" exitCode=0 Oct 14 13:31:39.591071 master-1 kubenswrapper[4740]: I1014 13:31:39.590788 4740 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958" exitCode=0 Oct 14 13:31:39.591071 master-1 kubenswrapper[4740]: I1014 13:31:39.590804 4740 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570" exitCode=0 Oct 14 13:31:39.591071 master-1 kubenswrapper[4740]: I1014 13:31:39.590820 4740 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d" exitCode=2 Oct 14 13:31:39.593080 master-1 kubenswrapper[4740]: I1014 13:31:39.593017 4740 generic.go:334] "Generic (PLEG): container finished" podID="47cf6c4d-eb3d-4ac3-b813-f53661dbaa33" containerID="401b4fec8eae3dc52ecd1577386c02b6cc54fc45c82cb6c37b5c8be129623672" exitCode=0 Oct 14 13:31:39.593080 master-1 kubenswrapper[4740]: I1014 13:31:39.593065 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" 
event={"ID":"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33","Type":"ContainerDied","Data":"401b4fec8eae3dc52ecd1577386c02b6cc54fc45c82cb6c37b5c8be129623672"} Oct 14 13:31:39.644068 master-1 kubenswrapper[4740]: I1014 13:31:39.643627 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 14 13:31:41.000515 master-1 kubenswrapper[4740]: I1014 13:31:41.000450 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:31:41.073743 master-1 kubenswrapper[4740]: I1014 13:31:41.069540 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kube-api-access\") pod \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " Oct 14 13:31:41.073743 master-1 kubenswrapper[4740]: I1014 13:31:41.069726 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-var-lock\") pod \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " Oct 14 13:31:41.073743 master-1 kubenswrapper[4740]: I1014 13:31:41.069759 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kubelet-dir\") pod \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\" (UID: \"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33\") " Oct 14 13:31:41.073743 master-1 kubenswrapper[4740]: I1014 13:31:41.069863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-var-lock" (OuterVolumeSpecName: 
"var-lock") pod "47cf6c4d-eb3d-4ac3-b813-f53661dbaa33" (UID: "47cf6c4d-eb3d-4ac3-b813-f53661dbaa33"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:31:41.073743 master-1 kubenswrapper[4740]: I1014 13:31:41.069994 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "47cf6c4d-eb3d-4ac3-b813-f53661dbaa33" (UID: "47cf6c4d-eb3d-4ac3-b813-f53661dbaa33"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:31:41.073743 master-1 kubenswrapper[4740]: I1014 13:31:41.070407 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-var-lock\") on node \"master-1\" DevicePath \"\"" Oct 14 13:31:41.073743 master-1 kubenswrapper[4740]: I1014 13:31:41.070429 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:31:41.077968 master-1 kubenswrapper[4740]: I1014 13:31:41.077891 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "47cf6c4d-eb3d-4ac3-b813-f53661dbaa33" (UID: "47cf6c4d-eb3d-4ac3-b813-f53661dbaa33"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:41.172352 master-1 kubenswrapper[4740]: I1014 13:31:41.172276 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47cf6c4d-eb3d-4ac3-b813-f53661dbaa33-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:31:41.606746 master-1 kubenswrapper[4740]: I1014 13:31:41.606675 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-1" event={"ID":"47cf6c4d-eb3d-4ac3-b813-f53661dbaa33","Type":"ContainerDied","Data":"b261f4ab56481d56cd699caa8f0e1f2d340f2236dd4bc5c998f27824ca048ad9"} Oct 14 13:31:41.606746 master-1 kubenswrapper[4740]: I1014 13:31:41.606727 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b261f4ab56481d56cd699caa8f0e1f2d340f2236dd4bc5c998f27824ca048ad9" Oct 14 13:31:41.607050 master-1 kubenswrapper[4740]: I1014 13:31:41.606787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-1" Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: I1014 13:31:43.961498 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: 
[+]poststarthook/start-apiextensions-controllers ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-discovery-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:31:43.961609 master-1 kubenswrapper[4740]: I1014 13:31:43.961583 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: I1014 13:31:48.962219 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:31:48.962339 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:31:48.965138 master-1 kubenswrapper[4740]: I1014 13:31:48.962371 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: I1014 13:31:53.961274 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:31:53.961367 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:31:53.963691 master-1 kubenswrapper[4740]: I1014 13:31:53.961377 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:31:53.963691 master-1 kubenswrapper[4740]: I1014 13:31:53.961562 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1"
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: I1014 13:31:53.970223 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:31:53.970303 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:31:53.971943 master-1 kubenswrapper[4740]: I1014 13:31:53.970329 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: I1014 13:31:58.961715 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:31:58.961839 master-1 kubenswrapper[4740]: I1014 13:31:58.961794 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: I1014 13:32:03.964835 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:32:03.965001 master-1 kubenswrapper[4740]: I1014 13:32:03.964932 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: I1014 13:32:08.964210 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:32:08.964374 master-1 kubenswrapper[4740]: I1014 13:32:08.964344 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: I1014 13:32:13.959437 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:32:13.959510 master-1 kubenswrapper[4740]: I1014 13:32:13.959498 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: I1014 13:32:18.963304 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]autoregister-completion ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld
Oct 14 13:32:18.963398 master-1 kubenswrapper[4740]: readyz check failed
Oct 14 13:32:18.966461 master-1 kubenswrapper[4740]: I1014 13:32:18.963408 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: I1014 13:32:23.961762 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]log ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]informer-sync ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok
Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 14 13:32:23.961854 master-1
kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:32:23.961854 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:32:23.966413 master-1 kubenswrapper[4740]: I1014 13:32:23.962152 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: I1014 13:32:28.963670 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: 
[+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok 
Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:32:28.964301 master-1 kubenswrapper[4740]: I1014 13:32:28.963772 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: I1014 13:32:33.969612 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: 
[+]api-openshift-apiserver-available ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:32:33.969724 master-1 
kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:32:33.969724 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 
13:32:33.969724 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:32:33.972348 master-1 kubenswrapper[4740]: I1014 13:32:33.969726 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: I1014 13:32:38.964103 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: 
[+]poststarthook/priority-and-fairness-filter ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:32:38.964213 master-1 kubenswrapper[4740]: I1014 13:32:38.964192 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: I1014 13:32:43.965408 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: 
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: 
[+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:32:43.965567 master-1 kubenswrapper[4740]: I1014 13:32:43.965478 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" 
podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: I1014 13:32:48.964055 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:32:48.964140 master-1 
kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/rbac/bootstrap-roles ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: 
[+]poststarthook/apiservice-discovery-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: [-]shutdown failed: reason withheld Oct 14 13:32:48.964140 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:32:48.966987 master-1 kubenswrapper[4740]: I1014 13:32:48.964148 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:32:51.811514 master-1 kubenswrapper[4740]: I1014 13:32:51.811462 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_42d61efaa0f96869cf2939026aad6022/kube-apiserver-cert-syncer/0.log" Oct 14 13:32:51.812940 master-1 kubenswrapper[4740]: I1014 13:32:51.812800 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:32:51.819165 master-1 kubenswrapper[4740]: I1014 13:32:51.819112 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 14 13:32:51.936419 master-1 kubenswrapper[4740]: I1014 13:32:51.936216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") pod \"42d61efaa0f96869cf2939026aad6022\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " Oct 14 13:32:51.936419 master-1 kubenswrapper[4740]: I1014 13:32:51.936321 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") pod \"42d61efaa0f96869cf2939026aad6022\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " Oct 14 13:32:51.936419 master-1 kubenswrapper[4740]: I1014 13:32:51.936371 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "42d61efaa0f96869cf2939026aad6022" (UID: "42d61efaa0f96869cf2939026aad6022"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:32:51.936801 master-1 kubenswrapper[4740]: I1014 13:32:51.936522 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") pod \"42d61efaa0f96869cf2939026aad6022\" (UID: \"42d61efaa0f96869cf2939026aad6022\") " Oct 14 13:32:51.936801 master-1 kubenswrapper[4740]: I1014 13:32:51.936537 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "42d61efaa0f96869cf2939026aad6022" (UID: "42d61efaa0f96869cf2939026aad6022"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:32:51.936801 master-1 kubenswrapper[4740]: I1014 13:32:51.936633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "42d61efaa0f96869cf2939026aad6022" (UID: "42d61efaa0f96869cf2939026aad6022"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:32:51.936987 master-1 kubenswrapper[4740]: I1014 13:32:51.936873 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-cert-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:32:51.936987 master-1 kubenswrapper[4740]: I1014 13:32:51.936889 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-audit-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:32:51.936987 master-1 kubenswrapper[4740]: I1014 13:32:51.936900 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/42d61efaa0f96869cf2939026aad6022-resource-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:32:52.198895 master-1 kubenswrapper[4740]: I1014 13:32:52.198698 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-1_42d61efaa0f96869cf2939026aad6022/kube-apiserver-cert-syncer/0.log" Oct 14 13:32:52.200435 master-1 kubenswrapper[4740]: I1014 13:32:52.200347 4740 generic.go:334] "Generic (PLEG): container finished" podID="42d61efaa0f96869cf2939026aad6022" containerID="ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346" exitCode=0 Oct 14 13:32:52.200577 master-1 kubenswrapper[4740]: I1014 13:32:52.200454 4740 scope.go:117] "RemoveContainer" containerID="8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179" Oct 14 13:32:52.200577 master-1 kubenswrapper[4740]: I1014 13:32:52.200462 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:32:52.209538 master-1 kubenswrapper[4740]: I1014 13:32:52.209442 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 14 13:32:52.220055 master-1 kubenswrapper[4740]: I1014 13:32:52.219977 4740 scope.go:117] "RemoveContainer" containerID="359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958" Oct 14 13:32:52.229498 master-1 kubenswrapper[4740]: I1014 13:32:52.229434 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-zf6rs" Oct 14 13:32:52.237276 master-1 kubenswrapper[4740]: I1014 13:32:52.237176 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-1" oldPodUID="42d61efaa0f96869cf2939026aad6022" podUID="23141951a25391899fad7b9f2d5b6739" Oct 14 13:32:52.238455 master-1 kubenswrapper[4740]: I1014 13:32:52.238372 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 14 13:32:52.249751 master-1 kubenswrapper[4740]: I1014 13:32:52.249684 4740 scope.go:117] "RemoveContainer" containerID="8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570" Oct 14 13:32:52.257962 master-1 kubenswrapper[4740]: I1014 13:32:52.257901 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 14 13:32:52.259285 master-1 kubenswrapper[4740]: I1014 13:32:52.259241 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 14 13:32:52.265509 master-1 kubenswrapper[4740]: I1014 13:32:52.265486 4740 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 14 13:32:52.266076 master-1 kubenswrapper[4740]: I1014 13:32:52.266045 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 14 13:32:52.267600 master-1 kubenswrapper[4740]: I1014 13:32:52.267567 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 14 13:32:52.271119 master-1 kubenswrapper[4740]: I1014 13:32:52.271080 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-5vqgl" Oct 14 13:32:52.271783 master-1 kubenswrapper[4740]: I1014 13:32:52.271748 4740 scope.go:117] "RemoveContainer" containerID="2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d" Oct 14 13:32:52.288373 master-1 kubenswrapper[4740]: I1014 13:32:52.288303 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 14 13:32:52.291261 master-1 kubenswrapper[4740]: I1014 13:32:52.291213 4740 scope.go:117] "RemoveContainer" containerID="ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346" Oct 14 13:32:52.298733 master-1 kubenswrapper[4740]: I1014 13:32:52.298673 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Oct 14 13:32:52.303660 master-1 kubenswrapper[4740]: I1014 13:32:52.303631 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 14 13:32:52.319780 master-1 kubenswrapper[4740]: I1014 13:32:52.319747 4740 scope.go:117] "RemoveContainer" containerID="82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f" Oct 14 13:32:52.328020 master-1 kubenswrapper[4740]: I1014 13:32:52.327939 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 14 13:32:52.349170 master-1 kubenswrapper[4740]: I1014 13:32:52.349129 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 14 13:32:52.350745 master-1 kubenswrapper[4740]: I1014 13:32:52.350632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 14 13:32:52.350954 master-1 kubenswrapper[4740]: I1014 13:32:52.350901 4740 scope.go:117] "RemoveContainer" containerID="8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179" Oct 14 13:32:52.351259 master-1 kubenswrapper[4740]: I1014 13:32:52.351160 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 14 13:32:52.351575 master-1 kubenswrapper[4740]: E1014 13:32:52.351526 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179\": container with ID starting with 8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179 not found: ID does not exist" containerID="8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179" Oct 14 13:32:52.351646 master-1 kubenswrapper[4740]: I1014 13:32:52.351582 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179"} err="failed to get container status \"8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179\": rpc error: code = NotFound desc = could not find container \"8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179\": container with ID starting with 8b5990aad37dd35bc0f18889201f5197673dc34a90696624d7bdde069fbb2179 not found: ID does not exist" Oct 14 13:32:52.351646 master-1 
kubenswrapper[4740]: I1014 13:32:52.351623 4740 scope.go:117] "RemoveContainer" containerID="359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958" Oct 14 13:32:52.352102 master-1 kubenswrapper[4740]: E1014 13:32:52.352056 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958\": container with ID starting with 359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958 not found: ID does not exist" containerID="359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958" Oct 14 13:32:52.352298 master-1 kubenswrapper[4740]: I1014 13:32:52.352097 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958"} err="failed to get container status \"359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958\": rpc error: code = NotFound desc = could not find container \"359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958\": container with ID starting with 359347b2fea375c71f4f41255643f80a0bc469da0ce01683e8524cdf9a16c958 not found: ID does not exist" Oct 14 13:32:52.352298 master-1 kubenswrapper[4740]: I1014 13:32:52.352118 4740 scope.go:117] "RemoveContainer" containerID="8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570" Oct 14 13:32:52.352468 master-1 kubenswrapper[4740]: E1014 13:32:52.352428 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570\": container with ID starting with 8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570 not found: ID does not exist" containerID="8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570" Oct 14 13:32:52.352468 master-1 kubenswrapper[4740]: I1014 13:32:52.352460 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570"} err="failed to get container status \"8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570\": rpc error: code = NotFound desc = could not find container \"8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570\": container with ID starting with 8633fc7616074e693d3ebc243a32a6fe6eaee31b310f9c941ffb7a6a3f02b570 not found: ID does not exist" Oct 14 13:32:52.352567 master-1 kubenswrapper[4740]: I1014 13:32:52.352479 4740 scope.go:117] "RemoveContainer" containerID="2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d" Oct 14 13:32:52.352892 master-1 kubenswrapper[4740]: E1014 13:32:52.352842 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d\": container with ID starting with 2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d not found: ID does not exist" containerID="2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d" Oct 14 13:32:52.352951 master-1 kubenswrapper[4740]: I1014 13:32:52.352891 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d"} err="failed to get container status \"2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d\": rpc error: code = NotFound desc = could not find container \"2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d\": container with ID starting with 2294877b8d2076a2cba9eb12712c84d54a50c4ae4dc6a8e5fd838facd22b702d not found: ID does not exist" Oct 14 13:32:52.352951 master-1 kubenswrapper[4740]: I1014 13:32:52.352918 4740 scope.go:117] "RemoveContainer" containerID="ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346" Oct 14 
13:32:52.353406 master-1 kubenswrapper[4740]: E1014 13:32:52.353347 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346\": container with ID starting with ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346 not found: ID does not exist" containerID="ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346" Oct 14 13:32:52.353473 master-1 kubenswrapper[4740]: I1014 13:32:52.353404 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346"} err="failed to get container status \"ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346\": rpc error: code = NotFound desc = could not find container \"ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346\": container with ID starting with ec063f0339568b948db2db20ed9908fe5475c363688bdf3f0c9d13860ff47346 not found: ID does not exist" Oct 14 13:32:52.353473 master-1 kubenswrapper[4740]: I1014 13:32:52.353429 4740 scope.go:117] "RemoveContainer" containerID="82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f" Oct 14 13:32:52.353808 master-1 kubenswrapper[4740]: E1014 13:32:52.353760 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f\": container with ID starting with 82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f not found: ID does not exist" containerID="82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f" Oct 14 13:32:52.353868 master-1 kubenswrapper[4740]: I1014 13:32:52.353805 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f"} err="failed 
to get container status \"82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f\": rpc error: code = NotFound desc = could not find container \"82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f\": container with ID starting with 82657ec264b82ceefbfec1e09a716b360c653214be0b4bff135a2faa0b70300f not found: ID does not exist" Oct 14 13:32:52.359572 master-1 kubenswrapper[4740]: I1014 13:32:52.359524 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 14 13:32:52.360015 master-1 kubenswrapper[4740]: I1014 13:32:52.359954 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 14 13:32:52.360959 master-1 kubenswrapper[4740]: I1014 13:32:52.360920 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 14 13:32:52.365496 master-1 kubenswrapper[4740]: I1014 13:32:52.365426 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 14 13:32:52.387924 master-1 kubenswrapper[4740]: I1014 13:32:52.387876 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 14 13:32:52.409647 master-1 kubenswrapper[4740]: I1014 13:32:52.409587 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 14 13:32:52.410838 master-1 kubenswrapper[4740]: I1014 13:32:52.410793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Oct 14 13:32:52.420861 master-1 kubenswrapper[4740]: I1014 13:32:52.420818 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 14 13:32:52.458365 master-1 
kubenswrapper[4740]: I1014 13:32:52.458069 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 14 13:32:52.459521 master-1 kubenswrapper[4740]: I1014 13:32:52.459470 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Oct 14 13:32:52.465973 master-1 kubenswrapper[4740]: I1014 13:32:52.465675 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 14 13:32:52.471575 master-1 kubenswrapper[4740]: I1014 13:32:52.471518 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 14 13:32:52.483163 master-1 kubenswrapper[4740]: I1014 13:32:52.483114 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 14 13:32:52.483352 master-1 kubenswrapper[4740]: I1014 13:32:52.483190 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-svq88" Oct 14 13:32:52.494532 master-1 kubenswrapper[4740]: I1014 13:32:52.494484 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Oct 14 13:32:52.499209 master-1 kubenswrapper[4740]: I1014 13:32:52.499016 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 14 13:32:52.523512 master-1 kubenswrapper[4740]: I1014 13:32:52.523459 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 14 13:32:52.523778 master-1 kubenswrapper[4740]: I1014 13:32:52.523656 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Oct 14 13:32:52.523996 
master-1 kubenswrapper[4740]: I1014 13:32:52.523905 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-t6l59" Oct 14 13:32:52.542723 master-1 kubenswrapper[4740]: I1014 13:32:52.542537 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 13:32:52.542723 master-1 kubenswrapper[4740]: I1014 13:32:52.542630 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 14 13:32:52.553162 master-1 kubenswrapper[4740]: I1014 13:32:52.553121 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Oct 14 13:32:52.556512 master-1 kubenswrapper[4740]: I1014 13:32:52.556470 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Oct 14 13:32:52.557376 master-1 kubenswrapper[4740]: I1014 13:32:52.557321 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 14 13:32:52.580416 master-1 kubenswrapper[4740]: I1014 13:32:52.580353 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 14 13:32:52.583495 master-1 kubenswrapper[4740]: I1014 13:32:52.583425 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 14 13:32:52.583782 master-1 kubenswrapper[4740]: I1014 13:32:52.583722 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 14 13:32:52.584760 master-1 kubenswrapper[4740]: I1014 13:32:52.584687 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Oct 14 13:32:52.592187 master-1 kubenswrapper[4740]: I1014 13:32:52.592135 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-r2r7j" Oct 14 13:32:52.610031 master-1 kubenswrapper[4740]: I1014 13:32:52.609998 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Oct 14 13:32:52.614875 master-1 kubenswrapper[4740]: I1014 13:32:52.614834 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 13:32:52.623315 master-1 kubenswrapper[4740]: I1014 13:32:52.622297 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Oct 14 13:32:52.625714 master-1 kubenswrapper[4740]: I1014 13:32:52.625666 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 14 13:32:52.631935 master-1 kubenswrapper[4740]: I1014 13:32:52.631889 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-qhnj6" Oct 14 13:32:52.638514 master-1 kubenswrapper[4740]: I1014 13:32:52.638460 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Oct 14 13:32:52.659808 master-1 kubenswrapper[4740]: I1014 13:32:52.659750 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 14 13:32:52.660148 master-1 kubenswrapper[4740]: I1014 13:32:52.660118 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Oct 14 13:32:52.669868 master-1 kubenswrapper[4740]: I1014 13:32:52.669823 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Oct 14 13:32:52.671834 master-1 kubenswrapper[4740]: I1014 13:32:52.671797 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Oct 14 13:32:52.672109 master-1 kubenswrapper[4740]: I1014 13:32:52.672074 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 14 13:32:52.690398 master-1 kubenswrapper[4740]: I1014 13:32:52.690344 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 14 13:32:52.700925 master-1 kubenswrapper[4740]: I1014 13:32:52.700835 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-68d7l" Oct 14 13:32:52.706968 master-1 kubenswrapper[4740]: I1014 13:32:52.706926 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 14 13:32:52.716333 master-1 kubenswrapper[4740]: I1014 13:32:52.716240 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 14 13:32:52.756697 master-1 kubenswrapper[4740]: I1014 13:32:52.756645 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-tq8pv" Oct 14 13:32:52.778637 master-1 kubenswrapper[4740]: I1014 13:32:52.778581 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 14 13:32:52.785720 master-1 kubenswrapper[4740]: I1014 13:32:52.785681 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 14 13:32:52.786438 master-1 kubenswrapper[4740]: I1014 13:32:52.786392 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Oct 14 13:32:52.786833 master-1 kubenswrapper[4740]: I1014 13:32:52.786795 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Oct 14 13:32:52.801308 master-1 kubenswrapper[4740]: I1014 13:32:52.801271 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 14 13:32:52.819119 master-1 kubenswrapper[4740]: I1014 13:32:52.819049 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 14 13:32:52.825803 master-1 kubenswrapper[4740]: I1014 13:32:52.825764 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 14 13:32:52.844186 master-1 kubenswrapper[4740]: I1014 13:32:52.844106 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Oct 14 13:32:52.849763 master-1 kubenswrapper[4740]: I1014 13:32:52.849715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 14 13:32:52.849951 master-1 kubenswrapper[4740]: I1014 13:32:52.849874 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 13:32:52.850980 master-1 kubenswrapper[4740]: I1014 13:32:52.850945 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 13:32:52.854630 master-1 kubenswrapper[4740]: I1014 13:32:52.854577 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 14 13:32:52.864248 master-1 kubenswrapper[4740]: I1014 13:32:52.864172 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Oct 14 13:32:52.868635 master-1 kubenswrapper[4740]: I1014 13:32:52.868597 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-2zbrt" Oct 14 13:32:52.874927 master-1 kubenswrapper[4740]: I1014 13:32:52.874891 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-8gpjk" Oct 14 13:32:52.887444 master-1 kubenswrapper[4740]: I1014 13:32:52.887396 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Oct 14 13:32:52.898745 master-1 kubenswrapper[4740]: I1014 13:32:52.898709 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 14 13:32:52.919098 master-1 kubenswrapper[4740]: I1014 13:32:52.919037 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 14 13:32:52.943010 master-1 kubenswrapper[4740]: I1014 13:32:52.942943 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8otna1nr4bh0o" Oct 14 13:32:52.944919 master-1 kubenswrapper[4740]: I1014 13:32:52.944871 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Oct 14 13:32:52.949481 master-1 kubenswrapper[4740]: I1014 13:32:52.949445 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-l545q" Oct 14 13:32:52.951690 master-1 kubenswrapper[4740]: I1014 13:32:52.951648 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d61efaa0f96869cf2939026aad6022" path="/var/lib/kubelet/pods/42d61efaa0f96869cf2939026aad6022/volumes" Oct 14 13:32:52.955819 master-1 
kubenswrapper[4740]: I1014 13:32:52.955773 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 14 13:32:52.956787 master-1 kubenswrapper[4740]: I1014 13:32:52.956759 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Oct 14 13:32:52.966555 master-1 kubenswrapper[4740]: I1014 13:32:52.966424 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 14 13:32:52.975746 master-1 kubenswrapper[4740]: I1014 13:32:52.975698 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 14 13:32:52.984136 master-1 kubenswrapper[4740]: I1014 13:32:52.984098 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 14 13:32:52.989854 master-1 kubenswrapper[4740]: I1014 13:32:52.989815 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 14 13:32:52.993136 master-1 kubenswrapper[4740]: I1014 13:32:52.993098 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Oct 14 13:32:52.994022 master-1 kubenswrapper[4740]: I1014 13:32:52.993977 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 13:32:53.011653 master-1 kubenswrapper[4740]: I1014 13:32:53.003769 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Oct 14 13:32:53.031282 master-1 kubenswrapper[4740]: I1014 13:32:53.031220 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Oct 14 
13:32:53.956786 master-1 kubenswrapper[4740]: I1014 13:32:53.956670 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" start-of-body= Oct 14 13:32:53.956786 master-1 kubenswrapper[4740]: I1014 13:32:53.956743 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="Get \"https://192.168.34.11:6443/readyz\": dial tcp 192.168.34.11:6443: connect: connection refused" Oct 14 13:32:55.942828 master-1 kubenswrapper[4740]: I1014 13:32:55.942756 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:32:55.965803 master-1 kubenswrapper[4740]: I1014 13:32:55.965751 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="6f95b31e-2f1b-4c5a-bbfd-14a2e928c7ce" Oct 14 13:32:55.965803 master-1 kubenswrapper[4740]: I1014 13:32:55.965793 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" podUID="6f95b31e-2f1b-4c5a-bbfd-14a2e928c7ce" Oct 14 13:32:55.990160 master-1 kubenswrapper[4740]: I1014 13:32:55.990080 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:32:55.994137 master-1 kubenswrapper[4740]: I1014 13:32:55.994097 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:32:56.004755 master-1 kubenswrapper[4740]: I1014 13:32:56.004682 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:32:56.049635 master-1 
kubenswrapper[4740]: I1014 13:32:56.049444 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-1"] Oct 14 13:32:56.051894 master-1 kubenswrapper[4740]: I1014 13:32:56.051836 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:32:56.092544 master-1 kubenswrapper[4740]: W1014 13:32:56.092469 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23141951a25391899fad7b9f2d5b6739.slice/crio-8a343686af6f83a50eb1f98e01367b1be20e1667e0e20f357d4ccf544eba2114 WatchSource:0}: Error finding container 8a343686af6f83a50eb1f98e01367b1be20e1667e0e20f357d4ccf544eba2114: Status 404 returned error can't find the container with id 8a343686af6f83a50eb1f98e01367b1be20e1667e0e20f357d4ccf544eba2114 Oct 14 13:32:56.236845 master-1 kubenswrapper[4740]: I1014 13:32:56.236772 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"8a343686af6f83a50eb1f98e01367b1be20e1667e0e20f357d4ccf544eba2114"} Oct 14 13:32:57.247348 master-1 kubenswrapper[4740]: I1014 13:32:57.247264 4740 generic.go:334] "Generic (PLEG): container finished" podID="23141951a25391899fad7b9f2d5b6739" containerID="0bf8969e1c8e46d48e91666889af71ab66a8e5315bd20df1dec1e36434e83a88" exitCode=0 Oct 14 13:32:57.247348 master-1 kubenswrapper[4740]: I1014 13:32:57.247328 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerDied","Data":"0bf8969e1c8e46d48e91666889af71ab66a8e5315bd20df1dec1e36434e83a88"} Oct 14 13:32:58.274293 master-1 kubenswrapper[4740]: I1014 13:32:58.273712 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"832b9cee47d4a6006ae5ceeef83c01dae69a9f8f1033da9f0084504d7ecfd836"} Oct 14 13:32:58.274293 master-1 kubenswrapper[4740]: I1014 13:32:58.273774 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"b649033dee45d5007242056630ef5227ff7bd60123b822101f22fd69241aed55"} Oct 14 13:32:58.274293 master-1 kubenswrapper[4740]: I1014 13:32:58.273785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"57c8b29fe636aac10778e79bd9c9750add8fa3e355bd2da84d62dfb9b7384a50"} Oct 14 13:32:58.961427 master-1 kubenswrapper[4740]: I1014 13:32:58.961358 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/readyz\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Oct 14 13:32:58.961688 master-1 kubenswrapper[4740]: I1014 13:32:58.961429 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 14 13:32:59.284091 master-1 kubenswrapper[4740]: I1014 13:32:59.284007 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"482cf4573547fd6928fa175df6cfb3d16f1e6a1758b7abfd7eb4545fcc33414f"} Oct 14 13:32:59.284091 master-1 kubenswrapper[4740]: I1014 13:32:59.284089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-1" event={"ID":"23141951a25391899fad7b9f2d5b6739","Type":"ContainerStarted","Data":"dfc5ac42769127bdccf1c39048998227683b19d1a6d89cb29ca0c5b1d478fe8d"} Oct 14 13:32:59.284696 master-1 kubenswrapper[4740]: I1014 13:32:59.284391 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:32:59.337829 master-1 kubenswrapper[4740]: I1014 13:32:59.337765 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-1" podStartSLOduration=3.337750315 podStartE2EDuration="3.337750315s" podCreationTimestamp="2025-10-14 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:59.335536606 +0000 UTC m=+1605.145825935" watchObservedRunningTime="2025-10-14 13:32:59.337750315 +0000 UTC m=+1605.148039644" Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: I1014 13:33:00.951006 4740 patch_prober.go:28] interesting pod/kube-apiserver-guard-master-1 container/guard namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]log ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]api-openshift-apiserver-available ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]api-openshift-oauth-apiserver-available ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]informer-sync ok Oct 14 13:33:00.951073 master-1 
kubenswrapper[4740]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-filter ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-informers ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-apiextensions-controllers ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/crd-informer-synced ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-system-namespaces-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 14 13:33:00.951073 master-1 
kubenswrapper[4740]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/bootstrap-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/start-kube-aggregator-informers ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-registration-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-discovery-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]autoregister-completion ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapi-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: [+]shutdown ok Oct 14 13:33:00.951073 master-1 kubenswrapper[4740]: readyz check failed Oct 14 13:33:00.953157 master-1 kubenswrapper[4740]: I1014 13:33:00.951090 4740 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" podUID="0967dd4e-97b5-4caa-a9ae-3dd2ef05ed56" containerName="guard" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:33:01.052405 master-1 kubenswrapper[4740]: I1014 13:33:01.052360 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:33:01.052681 master-1 kubenswrapper[4740]: I1014 13:33:01.052669 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:33:01.138717 master-1 kubenswrapper[4740]: I1014 13:33:01.138642 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:33:01.302735 master-1 kubenswrapper[4740]: I1014 13:33:01.302678 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:33:03.961992 master-1 kubenswrapper[4740]: I1014 13:33:03.961944 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-guard-master-1" Oct 14 13:33:05.004497 master-1 kubenswrapper[4740]: I1014 13:33:05.004427 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp"] Oct 14 13:33:05.005646 master-1 kubenswrapper[4740]: E1014 13:33:05.005626 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cf6c4d-eb3d-4ac3-b813-f53661dbaa33" containerName="installer" Oct 14 13:33:05.005743 master-1 kubenswrapper[4740]: I1014 13:33:05.005733 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cf6c4d-eb3d-4ac3-b813-f53661dbaa33" containerName="installer" Oct 14 13:33:05.006038 master-1 kubenswrapper[4740]: I1014 13:33:05.006019 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="47cf6c4d-eb3d-4ac3-b813-f53661dbaa33" containerName="installer" Oct 14 13:33:05.007209 master-1 kubenswrapper[4740]: I1014 13:33:05.007185 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" Oct 14 13:33:05.011679 master-1 kubenswrapper[4740]: I1014 13:33:05.011611 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 14 13:33:05.012604 master-1 kubenswrapper[4740]: I1014 13:33:05.012565 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 14 13:33:05.058200 master-1 kubenswrapper[4740]: I1014 13:33:05.058103 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp"] Oct 14 13:33:05.073744 master-1 kubenswrapper[4740]: I1014 13:33:05.073681 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtxm\" (UniqueName: \"kubernetes.io/projected/463941ee-751a-455b-b96c-cde7bfc082ca-kube-api-access-hjtxm\") pod \"cinder-operator-controller-manager-5484486656-vvnpp\" (UID: \"463941ee-751a-455b-b96c-cde7bfc082ca\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" Oct 14 13:33:05.136221 master-1 kubenswrapper[4740]: I1014 13:33:05.136129 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz"] Oct 14 13:33:05.137314 master-1 kubenswrapper[4740]: I1014 13:33:05.137258 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" Oct 14 13:33:05.164848 master-1 kubenswrapper[4740]: I1014 13:33:05.164792 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz"] Oct 14 13:33:05.175551 master-1 kubenswrapper[4740]: I1014 13:33:05.175495 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkbb\" (UniqueName: \"kubernetes.io/projected/11e88f54-5d07-42aa-bd60-8aa081af2220-kube-api-access-wlkbb\") pod \"designate-operator-controller-manager-67d84b9cc-698kz\" (UID: \"11e88f54-5d07-42aa-bd60-8aa081af2220\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" Oct 14 13:33:05.175733 master-1 kubenswrapper[4740]: I1014 13:33:05.175631 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtxm\" (UniqueName: \"kubernetes.io/projected/463941ee-751a-455b-b96c-cde7bfc082ca-kube-api-access-hjtxm\") pod \"cinder-operator-controller-manager-5484486656-vvnpp\" (UID: \"463941ee-751a-455b-b96c-cde7bfc082ca\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" Oct 14 13:33:05.194654 master-1 kubenswrapper[4740]: I1014 13:33:05.194306 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv"] Oct 14 13:33:05.196028 master-1 kubenswrapper[4740]: I1014 13:33:05.195984 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" Oct 14 13:33:05.216726 master-1 kubenswrapper[4740]: I1014 13:33:05.216669 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv"] Oct 14 13:33:05.246069 master-1 kubenswrapper[4740]: I1014 13:33:05.245993 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtxm\" (UniqueName: \"kubernetes.io/projected/463941ee-751a-455b-b96c-cde7bfc082ca-kube-api-access-hjtxm\") pod \"cinder-operator-controller-manager-5484486656-vvnpp\" (UID: \"463941ee-751a-455b-b96c-cde7bfc082ca\") " pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" Oct 14 13:33:05.277003 master-1 kubenswrapper[4740]: I1014 13:33:05.276639 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljk7\" (UniqueName: \"kubernetes.io/projected/98bfa5be-8e40-48cb-a3c5-a48d74649ff0-kube-api-access-rljk7\") pod \"glance-operator-controller-manager-59bd97c6b9-s2zqv\" (UID: \"98bfa5be-8e40-48cb-a3c5-a48d74649ff0\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" Oct 14 13:33:05.277003 master-1 kubenswrapper[4740]: I1014 13:33:05.276815 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlkbb\" (UniqueName: \"kubernetes.io/projected/11e88f54-5d07-42aa-bd60-8aa081af2220-kube-api-access-wlkbb\") pod \"designate-operator-controller-manager-67d84b9cc-698kz\" (UID: \"11e88f54-5d07-42aa-bd60-8aa081af2220\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" Oct 14 13:33:05.292326 master-1 kubenswrapper[4740]: I1014 13:33:05.292197 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg"] Oct 14 13:33:05.293332 master-1 kubenswrapper[4740]: 
I1014 13:33:05.293259 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.298421 master-1 kubenswrapper[4740]: I1014 13:33:05.298244 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 14 13:33:05.307765 master-1 kubenswrapper[4740]: I1014 13:33:05.307704 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlkbb\" (UniqueName: \"kubernetes.io/projected/11e88f54-5d07-42aa-bd60-8aa081af2220-kube-api-access-wlkbb\") pod \"designate-operator-controller-manager-67d84b9cc-698kz\" (UID: \"11e88f54-5d07-42aa-bd60-8aa081af2220\") " pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" Oct 14 13:33:05.320857 master-1 kubenswrapper[4740]: I1014 13:33:05.320762 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg"] Oct 14 13:33:05.333640 master-1 kubenswrapper[4740]: I1014 13:33:05.333587 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" Oct 14 13:33:05.366175 master-1 kubenswrapper[4740]: I1014 13:33:05.366122 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8"] Oct 14 13:33:05.367788 master-1 kubenswrapper[4740]: I1014 13:33:05.367715 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" Oct 14 13:33:05.379123 master-1 kubenswrapper[4740]: I1014 13:33:05.379062 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bfa1391-cf7c-4d68-834b-054ff31950aa-cert\") pod \"infra-operator-controller-manager-d68fd5cdf-sbpvg\" (UID: \"4bfa1391-cf7c-4d68-834b-054ff31950aa\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.379361 master-1 kubenswrapper[4740]: I1014 13:33:05.379147 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rljk7\" (UniqueName: \"kubernetes.io/projected/98bfa5be-8e40-48cb-a3c5-a48d74649ff0-kube-api-access-rljk7\") pod \"glance-operator-controller-manager-59bd97c6b9-s2zqv\" (UID: \"98bfa5be-8e40-48cb-a3c5-a48d74649ff0\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" Oct 14 13:33:05.379361 master-1 kubenswrapper[4740]: I1014 13:33:05.379252 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwgzr\" (UniqueName: \"kubernetes.io/projected/4bfa1391-cf7c-4d68-834b-054ff31950aa-kube-api-access-jwgzr\") pod \"infra-operator-controller-manager-d68fd5cdf-sbpvg\" (UID: \"4bfa1391-cf7c-4d68-834b-054ff31950aa\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.391444 master-1 kubenswrapper[4740]: I1014 13:33:05.391369 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8"] Oct 14 13:33:05.410363 master-1 kubenswrapper[4740]: I1014 13:33:05.410242 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljk7\" (UniqueName: \"kubernetes.io/projected/98bfa5be-8e40-48cb-a3c5-a48d74649ff0-kube-api-access-rljk7\") pod 
\"glance-operator-controller-manager-59bd97c6b9-s2zqv\" (UID: \"98bfa5be-8e40-48cb-a3c5-a48d74649ff0\") " pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" Oct 14 13:33:05.468469 master-1 kubenswrapper[4740]: I1014 13:33:05.459201 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" Oct 14 13:33:05.488259 master-1 kubenswrapper[4740]: I1014 13:33:05.481044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bfa1391-cf7c-4d68-834b-054ff31950aa-cert\") pod \"infra-operator-controller-manager-d68fd5cdf-sbpvg\" (UID: \"4bfa1391-cf7c-4d68-834b-054ff31950aa\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.488259 master-1 kubenswrapper[4740]: I1014 13:33:05.481183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwgzr\" (UniqueName: \"kubernetes.io/projected/4bfa1391-cf7c-4d68-834b-054ff31950aa-kube-api-access-jwgzr\") pod \"infra-operator-controller-manager-d68fd5cdf-sbpvg\" (UID: \"4bfa1391-cf7c-4d68-834b-054ff31950aa\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.488259 master-1 kubenswrapper[4740]: I1014 13:33:05.481251 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5sl\" (UniqueName: \"kubernetes.io/projected/40ba280a-ef2f-4ba3-8cf3-284b8129114d-kube-api-access-jt5sl\") pod \"keystone-operator-controller-manager-f4487c759-hdfw8\" (UID: \"40ba280a-ef2f-4ba3-8cf3-284b8129114d\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" Oct 14 13:33:05.495078 master-1 kubenswrapper[4740]: I1014 13:33:05.494974 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/4bfa1391-cf7c-4d68-834b-054ff31950aa-cert\") pod \"infra-operator-controller-manager-d68fd5cdf-sbpvg\" (UID: \"4bfa1391-cf7c-4d68-834b-054ff31950aa\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.516681 master-1 kubenswrapper[4740]: I1014 13:33:05.516617 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" Oct 14 13:33:05.522882 master-1 kubenswrapper[4740]: I1014 13:33:05.522818 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwgzr\" (UniqueName: \"kubernetes.io/projected/4bfa1391-cf7c-4d68-834b-054ff31950aa-kube-api-access-jwgzr\") pod \"infra-operator-controller-manager-d68fd5cdf-sbpvg\" (UID: \"4bfa1391-cf7c-4d68-834b-054ff31950aa\") " pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.583060 master-1 kubenswrapper[4740]: I1014 13:33:05.582996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5sl\" (UniqueName: \"kubernetes.io/projected/40ba280a-ef2f-4ba3-8cf3-284b8129114d-kube-api-access-jt5sl\") pod \"keystone-operator-controller-manager-f4487c759-hdfw8\" (UID: \"40ba280a-ef2f-4ba3-8cf3-284b8129114d\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" Oct 14 13:33:05.642956 master-1 kubenswrapper[4740]: I1014 13:33:05.641820 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5sl\" (UniqueName: \"kubernetes.io/projected/40ba280a-ef2f-4ba3-8cf3-284b8129114d-kube-api-access-jt5sl\") pod \"keystone-operator-controller-manager-f4487c759-hdfw8\" (UID: \"40ba280a-ef2f-4ba3-8cf3-284b8129114d\") " pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" Oct 14 13:33:05.643274 master-1 kubenswrapper[4740]: I1014 13:33:05.641999 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk"] Oct 14 13:33:05.644352 master-1 kubenswrapper[4740]: I1014 13:33:05.644313 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" Oct 14 13:33:05.685034 master-1 kubenswrapper[4740]: I1014 13:33:05.684505 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq6z2\" (UniqueName: \"kubernetes.io/projected/edc000c2-1306-4462-990a-859976a59b39-kube-api-access-fq6z2\") pod \"ovn-operator-controller-manager-f9dd6d5b6-46wwk\" (UID: \"edc000c2-1306-4462-990a-859976a59b39\") " pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" Oct 14 13:33:05.699804 master-1 kubenswrapper[4740]: I1014 13:33:05.699753 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:05.706623 master-1 kubenswrapper[4740]: I1014 13:33:05.706549 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk"] Oct 14 13:33:05.712598 master-1 kubenswrapper[4740]: I1014 13:33:05.711797 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" Oct 14 13:33:05.718130 master-1 kubenswrapper[4740]: I1014 13:33:05.718002 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk"] Oct 14 13:33:05.719937 master-1 kubenswrapper[4740]: I1014 13:33:05.719453 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" Oct 14 13:33:05.732095 master-1 kubenswrapper[4740]: I1014 13:33:05.731992 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk"] Oct 14 13:33:05.785853 master-1 kubenswrapper[4740]: I1014 13:33:05.785548 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq6z2\" (UniqueName: \"kubernetes.io/projected/edc000c2-1306-4462-990a-859976a59b39-kube-api-access-fq6z2\") pod \"ovn-operator-controller-manager-f9dd6d5b6-46wwk\" (UID: \"edc000c2-1306-4462-990a-859976a59b39\") " pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" Oct 14 13:33:05.785853 master-1 kubenswrapper[4740]: I1014 13:33:05.785675 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjzg\" (UniqueName: \"kubernetes.io/projected/40a4a0ce-984d-4e5e-aec8-605d1cae1091-kube-api-access-2sjzg\") pod \"placement-operator-controller-manager-569c9576c5-4zgfk\" (UID: \"40a4a0ce-984d-4e5e-aec8-605d1cae1091\") " pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" Oct 14 13:33:05.815253 master-1 kubenswrapper[4740]: I1014 13:33:05.815179 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp"] Oct 14 13:33:05.821539 master-1 kubenswrapper[4740]: I1014 13:33:05.821498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq6z2\" (UniqueName: \"kubernetes.io/projected/edc000c2-1306-4462-990a-859976a59b39-kube-api-access-fq6z2\") pod \"ovn-operator-controller-manager-f9dd6d5b6-46wwk\" (UID: \"edc000c2-1306-4462-990a-859976a59b39\") " pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" Oct 14 13:33:05.832346 master-1 kubenswrapper[4740]: W1014 
13:33:05.832300 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463941ee_751a_455b_b96c_cde7bfc082ca.slice/crio-47d0ac6db9d27db036bb657937075dfa71fff216a2aa6523ef3d1757a32e2e9f WatchSource:0}: Error finding container 47d0ac6db9d27db036bb657937075dfa71fff216a2aa6523ef3d1757a32e2e9f: Status 404 returned error can't find the container with id 47d0ac6db9d27db036bb657937075dfa71fff216a2aa6523ef3d1757a32e2e9f Oct 14 13:33:05.868969 master-1 kubenswrapper[4740]: I1014 13:33:05.868839 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd"] Oct 14 13:33:05.881554 master-1 kubenswrapper[4740]: I1014 13:33:05.878008 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" Oct 14 13:33:05.888621 master-1 kubenswrapper[4740]: I1014 13:33:05.885070 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd"] Oct 14 13:33:05.892052 master-1 kubenswrapper[4740]: I1014 13:33:05.891996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjzg\" (UniqueName: \"kubernetes.io/projected/40a4a0ce-984d-4e5e-aec8-605d1cae1091-kube-api-access-2sjzg\") pod \"placement-operator-controller-manager-569c9576c5-4zgfk\" (UID: \"40a4a0ce-984d-4e5e-aec8-605d1cae1091\") " pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" Oct 14 13:33:05.915168 master-1 kubenswrapper[4740]: I1014 13:33:05.913902 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjzg\" (UniqueName: \"kubernetes.io/projected/40a4a0ce-984d-4e5e-aec8-605d1cae1091-kube-api-access-2sjzg\") pod \"placement-operator-controller-manager-569c9576c5-4zgfk\" (UID: 
\"40a4a0ce-984d-4e5e-aec8-605d1cae1091\") " pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" Oct 14 13:33:05.987404 master-1 kubenswrapper[4740]: I1014 13:33:05.986759 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" Oct 14 13:33:05.993869 master-1 kubenswrapper[4740]: I1014 13:33:05.993818 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlfh\" (UniqueName: \"kubernetes.io/projected/428ec1e3-0da8-410b-9692-55d572e3c4b5-kube-api-access-9mlfh\") pod \"watcher-operator-controller-manager-7c4579d8cf-pqbbd\" (UID: \"428ec1e3-0da8-410b-9692-55d572e3c4b5\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" Oct 14 13:33:05.998830 master-1 kubenswrapper[4740]: I1014 13:33:05.998744 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz"] Oct 14 13:33:06.017904 master-1 kubenswrapper[4740]: W1014 13:33:06.017572 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e88f54_5d07_42aa_bd60_8aa081af2220.slice/crio-ef57eb96155f659dc4bbb2ce8dee99faddb3fbbfcbb0cc27cb35ad06e1bf4a8d WatchSource:0}: Error finding container ef57eb96155f659dc4bbb2ce8dee99faddb3fbbfcbb0cc27cb35ad06e1bf4a8d: Status 404 returned error can't find the container with id ef57eb96155f659dc4bbb2ce8dee99faddb3fbbfcbb0cc27cb35ad06e1bf4a8d Oct 14 13:33:06.052835 master-1 kubenswrapper[4740]: I1014 13:33:06.052684 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" Oct 14 13:33:06.096652 master-1 kubenswrapper[4740]: I1014 13:33:06.096563 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mlfh\" (UniqueName: \"kubernetes.io/projected/428ec1e3-0da8-410b-9692-55d572e3c4b5-kube-api-access-9mlfh\") pod \"watcher-operator-controller-manager-7c4579d8cf-pqbbd\" (UID: \"428ec1e3-0da8-410b-9692-55d572e3c4b5\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" Oct 14 13:33:06.130479 master-1 kubenswrapper[4740]: I1014 13:33:06.130423 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv"] Oct 14 13:33:06.130550 master-1 kubenswrapper[4740]: I1014 13:33:06.130488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mlfh\" (UniqueName: \"kubernetes.io/projected/428ec1e3-0da8-410b-9692-55d572e3c4b5-kube-api-access-9mlfh\") pod \"watcher-operator-controller-manager-7c4579d8cf-pqbbd\" (UID: \"428ec1e3-0da8-410b-9692-55d572e3c4b5\") " pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" Oct 14 13:33:06.135108 master-1 kubenswrapper[4740]: W1014 13:33:06.135021 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98bfa5be_8e40_48cb_a3c5_a48d74649ff0.slice/crio-59035a663f6d1f7c52764da4c9d614547b3340c522167c5aa78da3cc983005fa WatchSource:0}: Error finding container 59035a663f6d1f7c52764da4c9d614547b3340c522167c5aa78da3cc983005fa: Status 404 returned error can't find the container with id 59035a663f6d1f7c52764da4c9d614547b3340c522167c5aa78da3cc983005fa Oct 14 13:33:06.177030 master-1 kubenswrapper[4740]: I1014 13:33:06.176939 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n"] Oct 14 13:33:06.178296 master-1 kubenswrapper[4740]: I1014 13:33:06.178241 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" Oct 14 13:33:06.200372 master-1 kubenswrapper[4740]: I1014 13:33:06.197863 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddh6d\" (UniqueName: \"kubernetes.io/projected/55c6471e-18b9-4bcb-95d3-727dfbcfb853-kube-api-access-ddh6d\") pod \"rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n\" (UID: \"55c6471e-18b9-4bcb-95d3-727dfbcfb853\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" Oct 14 13:33:06.203009 master-1 kubenswrapper[4740]: I1014 13:33:06.202928 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n"] Oct 14 13:33:06.208318 master-1 kubenswrapper[4740]: I1014 13:33:06.208144 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" Oct 14 13:33:06.288544 master-1 kubenswrapper[4740]: I1014 13:33:06.288456 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8"] Oct 14 13:33:06.291382 master-1 kubenswrapper[4740]: W1014 13:33:06.290857 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40ba280a_ef2f_4ba3_8cf3_284b8129114d.slice/crio-ad3942a2e6624ed266db77a455993cbebce4dd98be32e8f0e14cd4366201b0a3 WatchSource:0}: Error finding container ad3942a2e6624ed266db77a455993cbebce4dd98be32e8f0e14cd4366201b0a3: Status 404 returned error can't find the container with id ad3942a2e6624ed266db77a455993cbebce4dd98be32e8f0e14cd4366201b0a3 Oct 14 13:33:06.293416 master-1 kubenswrapper[4740]: I1014 13:33:06.293327 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg"] Oct 14 13:33:06.295150 master-1 kubenswrapper[4740]: W1014 13:33:06.295003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bfa1391_cf7c_4d68_834b_054ff31950aa.slice/crio-aa2b10acb7da9654357feeb37a38229a13a1044262c236a7a3add9b5b29dd2e1 WatchSource:0}: Error finding container aa2b10acb7da9654357feeb37a38229a13a1044262c236a7a3add9b5b29dd2e1: Status 404 returned error can't find the container with id aa2b10acb7da9654357feeb37a38229a13a1044262c236a7a3add9b5b29dd2e1 Oct 14 13:33:06.299537 master-1 kubenswrapper[4740]: I1014 13:33:06.299496 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddh6d\" (UniqueName: \"kubernetes.io/projected/55c6471e-18b9-4bcb-95d3-727dfbcfb853-kube-api-access-ddh6d\") pod \"rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n\" (UID: 
\"55c6471e-18b9-4bcb-95d3-727dfbcfb853\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" Oct 14 13:33:06.333214 master-1 kubenswrapper[4740]: I1014 13:33:06.333138 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" event={"ID":"11e88f54-5d07-42aa-bd60-8aa081af2220","Type":"ContainerStarted","Data":"ef57eb96155f659dc4bbb2ce8dee99faddb3fbbfcbb0cc27cb35ad06e1bf4a8d"} Oct 14 13:33:06.335431 master-1 kubenswrapper[4740]: I1014 13:33:06.335356 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" event={"ID":"98bfa5be-8e40-48cb-a3c5-a48d74649ff0","Type":"ContainerStarted","Data":"59035a663f6d1f7c52764da4c9d614547b3340c522167c5aa78da3cc983005fa"} Oct 14 13:33:06.337038 master-1 kubenswrapper[4740]: I1014 13:33:06.336999 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" event={"ID":"40ba280a-ef2f-4ba3-8cf3-284b8129114d","Type":"ContainerStarted","Data":"ad3942a2e6624ed266db77a455993cbebce4dd98be32e8f0e14cd4366201b0a3"} Oct 14 13:33:06.339096 master-1 kubenswrapper[4740]: I1014 13:33:06.339045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddh6d\" (UniqueName: \"kubernetes.io/projected/55c6471e-18b9-4bcb-95d3-727dfbcfb853-kube-api-access-ddh6d\") pod \"rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n\" (UID: \"55c6471e-18b9-4bcb-95d3-727dfbcfb853\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" Oct 14 13:33:06.339183 master-1 kubenswrapper[4740]: I1014 13:33:06.339116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" 
event={"ID":"4bfa1391-cf7c-4d68-834b-054ff31950aa","Type":"ContainerStarted","Data":"aa2b10acb7da9654357feeb37a38229a13a1044262c236a7a3add9b5b29dd2e1"} Oct 14 13:33:06.340863 master-1 kubenswrapper[4740]: I1014 13:33:06.340797 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" event={"ID":"463941ee-751a-455b-b96c-cde7bfc082ca","Type":"ContainerStarted","Data":"47d0ac6db9d27db036bb657937075dfa71fff216a2aa6523ef3d1757a32e2e9f"} Oct 14 13:33:06.509382 master-1 kubenswrapper[4740]: I1014 13:33:06.509276 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" Oct 14 13:33:06.519406 master-1 kubenswrapper[4740]: W1014 13:33:06.515911 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc000c2_1306_4462_990a_859976a59b39.slice/crio-4e49a0e9c45a4aea65e3dd170494ed53bffac8a83a9c700b8ee6b05995878873 WatchSource:0}: Error finding container 4e49a0e9c45a4aea65e3dd170494ed53bffac8a83a9c700b8ee6b05995878873: Status 404 returned error can't find the container with id 4e49a0e9c45a4aea65e3dd170494ed53bffac8a83a9c700b8ee6b05995878873 Oct 14 13:33:06.519406 master-1 kubenswrapper[4740]: I1014 13:33:06.516959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk"] Oct 14 13:33:06.524316 master-1 kubenswrapper[4740]: I1014 13:33:06.524262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk"] Oct 14 13:33:06.761586 master-1 kubenswrapper[4740]: I1014 13:33:06.761539 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd"] Oct 14 13:33:06.960309 master-1 kubenswrapper[4740]: W1014 13:33:06.959946 4740 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55c6471e_18b9_4bcb_95d3_727dfbcfb853.slice/crio-b7277e45b01139bb25a019ba3ac3a738f64a6e6b1ee86532461039d2666f5ac2 WatchSource:0}: Error finding container b7277e45b01139bb25a019ba3ac3a738f64a6e6b1ee86532461039d2666f5ac2: Status 404 returned error can't find the container with id b7277e45b01139bb25a019ba3ac3a738f64a6e6b1ee86532461039d2666f5ac2 Oct 14 13:33:06.964522 master-1 kubenswrapper[4740]: I1014 13:33:06.964429 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n"] Oct 14 13:33:07.349425 master-1 kubenswrapper[4740]: I1014 13:33:07.349375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" event={"ID":"40a4a0ce-984d-4e5e-aec8-605d1cae1091","Type":"ContainerStarted","Data":"b84791f7415aabe9e7e508408b28e77b0495a61684240e1e3496872cc6262dee"} Oct 14 13:33:07.350977 master-1 kubenswrapper[4740]: I1014 13:33:07.350934 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" event={"ID":"edc000c2-1306-4462-990a-859976a59b39","Type":"ContainerStarted","Data":"4e49a0e9c45a4aea65e3dd170494ed53bffac8a83a9c700b8ee6b05995878873"} Oct 14 13:33:07.352206 master-1 kubenswrapper[4740]: I1014 13:33:07.352171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" event={"ID":"428ec1e3-0da8-410b-9692-55d572e3c4b5","Type":"ContainerStarted","Data":"d7538c49c75bf1a1372f944b7a3cfe929e3fb03c04e10404c9e7d1ba1eb03203"} Oct 14 13:33:07.353169 master-1 kubenswrapper[4740]: I1014 13:33:07.353149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" 
event={"ID":"55c6471e-18b9-4bcb-95d3-727dfbcfb853","Type":"ContainerStarted","Data":"b7277e45b01139bb25a019ba3ac3a738f64a6e6b1ee86532461039d2666f5ac2"} Oct 14 13:33:14.414415 master-1 kubenswrapper[4740]: I1014 13:33:14.414038 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" event={"ID":"40a4a0ce-984d-4e5e-aec8-605d1cae1091","Type":"ContainerStarted","Data":"d239f63a989ace32b544db060c6fd17228db9cfa78a3712f134bbccd66383f40"} Oct 14 13:33:14.419452 master-1 kubenswrapper[4740]: I1014 13:33:14.419294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" event={"ID":"edc000c2-1306-4462-990a-859976a59b39","Type":"ContainerStarted","Data":"a265837a13b6b45f8a03e9918e2fef4bd2cbca8907e618b83d8a17daaaaede14"} Oct 14 13:33:14.434732 master-1 kubenswrapper[4740]: I1014 13:33:14.434631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" event={"ID":"463941ee-751a-455b-b96c-cde7bfc082ca","Type":"ContainerStarted","Data":"2629b1689c8cd6f96c2d197758d6574e4436b02243dafbbe661c865a443ec97d"} Oct 14 13:33:14.437142 master-1 kubenswrapper[4740]: I1014 13:33:14.437058 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" event={"ID":"428ec1e3-0da8-410b-9692-55d572e3c4b5","Type":"ContainerStarted","Data":"c6da2d09f5aef5f2c96df527ebf0585db57e1e936bc83e19a6f7fb24b2909e85"} Oct 14 13:33:14.439620 master-1 kubenswrapper[4740]: I1014 13:33:14.439530 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" event={"ID":"11e88f54-5d07-42aa-bd60-8aa081af2220","Type":"ContainerStarted","Data":"f8e44d4853e211b0e9e24873c34ec54c136f99655388d1e28ca4e2ddc82fe3f1"} Oct 14 13:33:14.441704 master-1 
kubenswrapper[4740]: I1014 13:33:14.441642 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" event={"ID":"55c6471e-18b9-4bcb-95d3-727dfbcfb853","Type":"ContainerStarted","Data":"aafb6c2045275c6e77a0895af2a0c0dbfc7e204a40060dba09eca86cb1bd82e1"} Oct 14 13:33:14.449930 master-1 kubenswrapper[4740]: I1014 13:33:14.449859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" event={"ID":"40ba280a-ef2f-4ba3-8cf3-284b8129114d","Type":"ContainerStarted","Data":"84f847fd697b1ea7d52a50fb9a510cd870bda6222407a396a1625a06917fd545"} Oct 14 13:33:14.452120 master-1 kubenswrapper[4740]: I1014 13:33:14.452040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" event={"ID":"98bfa5be-8e40-48cb-a3c5-a48d74649ff0","Type":"ContainerStarted","Data":"c5179c4d602182897b673bfeada6bc6c263a15a017a5e23cd4a24233fb291467"} Oct 14 13:33:14.454502 master-1 kubenswrapper[4740]: I1014 13:33:14.454434 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" event={"ID":"4bfa1391-cf7c-4d68-834b-054ff31950aa","Type":"ContainerStarted","Data":"8c6d2177ede236f070dfba0812acf96010fa6769ffdccb0ef763043ef8473546"} Oct 14 13:33:14.466785 master-1 kubenswrapper[4740]: I1014 13:33:14.466679 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n" podStartSLOduration=2.290003921 podStartE2EDuration="8.466659857s" podCreationTimestamp="2025-10-14 13:33:06 +0000 UTC" firstStartedPulling="2025-10-14 13:33:06.966176497 +0000 UTC m=+1612.776465826" lastFinishedPulling="2025-10-14 13:33:13.142832433 +0000 UTC m=+1618.953121762" observedRunningTime="2025-10-14 13:33:14.464718805 +0000 UTC m=+1620.275008154" 
watchObservedRunningTime="2025-10-14 13:33:14.466659857 +0000 UTC m=+1620.276949186" Oct 14 13:33:16.058189 master-1 kubenswrapper[4740]: I1014 13:33:16.058128 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-1" Oct 14 13:33:16.477483 master-1 kubenswrapper[4740]: I1014 13:33:16.477325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" event={"ID":"40a4a0ce-984d-4e5e-aec8-605d1cae1091","Type":"ContainerStarted","Data":"44c758d9e7a38d2b2df6e8fc9aa4073c38a1b517703e49428e7c5fbece1a3271"} Oct 14 13:33:16.478218 master-1 kubenswrapper[4740]: I1014 13:33:16.478145 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" Oct 14 13:33:16.774306 master-1 kubenswrapper[4740]: I1014 13:33:16.773987 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" podStartSLOduration=2.068036739 podStartE2EDuration="11.773965106s" podCreationTimestamp="2025-10-14 13:33:05 +0000 UTC" firstStartedPulling="2025-10-14 13:33:06.532962605 +0000 UTC m=+1612.343251934" lastFinishedPulling="2025-10-14 13:33:16.238890972 +0000 UTC m=+1622.049180301" observedRunningTime="2025-10-14 13:33:16.758710382 +0000 UTC m=+1622.568999711" watchObservedRunningTime="2025-10-14 13:33:16.773965106 +0000 UTC m=+1622.584254435" Oct 14 13:33:17.485447 master-1 kubenswrapper[4740]: I1014 13:33:17.485368 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" event={"ID":"11e88f54-5d07-42aa-bd60-8aa081af2220","Type":"ContainerStarted","Data":"b7b3c1c3942f60fe1b5769f9bd4f3830c1d4ee6f0e6e07691e60bc1e230844bf"} Oct 14 13:33:17.485997 master-1 kubenswrapper[4740]: I1014 13:33:17.485691 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" Oct 14 13:33:17.487374 master-1 kubenswrapper[4740]: I1014 13:33:17.487331 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" event={"ID":"edc000c2-1306-4462-990a-859976a59b39","Type":"ContainerStarted","Data":"c1d3ff8912f19c3ab230593f19f9b46b5e3c9907981869b962c8446398622001"} Oct 14 13:33:17.487471 master-1 kubenswrapper[4740]: I1014 13:33:17.487449 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" Oct 14 13:33:17.489051 master-1 kubenswrapper[4740]: I1014 13:33:17.489010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" event={"ID":"40ba280a-ef2f-4ba3-8cf3-284b8129114d","Type":"ContainerStarted","Data":"7a54de81fc7860607839792d32c556e7158a97916cd8827b88c481c1406639ff"} Oct 14 13:33:17.489176 master-1 kubenswrapper[4740]: I1014 13:33:17.489123 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" Oct 14 13:33:17.490786 master-1 kubenswrapper[4740]: I1014 13:33:17.490740 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" event={"ID":"463941ee-751a-455b-b96c-cde7bfc082ca","Type":"ContainerStarted","Data":"9304b22b97cf68963c638918aca6aa92a3ad1bf02932499b374449054823ef92"} Oct 14 13:33:17.490906 master-1 kubenswrapper[4740]: I1014 13:33:17.490844 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" Oct 14 13:33:17.492397 master-1 kubenswrapper[4740]: I1014 13:33:17.492355 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" event={"ID":"98bfa5be-8e40-48cb-a3c5-a48d74649ff0","Type":"ContainerStarted","Data":"278ada4f3ee58837711185e011a910bc5fc69d84c9ea102eb5315b03135b4377"} Oct 14 13:33:17.492528 master-1 kubenswrapper[4740]: I1014 13:33:17.492496 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" Oct 14 13:33:17.494461 master-1 kubenswrapper[4740]: I1014 13:33:17.494371 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" event={"ID":"428ec1e3-0da8-410b-9692-55d572e3c4b5","Type":"ContainerStarted","Data":"acc5cfa1a2b9e92d2434c5d397c8f09502f9fcb71cbd2d46682f0eff1c97bfda"} Oct 14 13:33:17.494612 master-1 kubenswrapper[4740]: I1014 13:33:17.494553 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" Oct 14 13:33:17.496279 master-1 kubenswrapper[4740]: I1014 13:33:17.496197 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" event={"ID":"4bfa1391-cf7c-4d68-834b-054ff31950aa","Type":"ContainerStarted","Data":"d7d3ab96410bfcf4f772a0946065a617af8dfa137c3e13209e267eececfb0592"} Oct 14 13:33:17.496813 master-1 kubenswrapper[4740]: I1014 13:33:17.496774 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:17.515255 master-1 kubenswrapper[4740]: I1014 13:33:17.515136 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" podStartSLOduration=2.091137819 podStartE2EDuration="12.515115206s" podCreationTimestamp="2025-10-14 13:33:05 +0000 UTC" 
firstStartedPulling="2025-10-14 13:33:06.019799171 +0000 UTC m=+1611.830088500" lastFinishedPulling="2025-10-14 13:33:16.443776548 +0000 UTC m=+1622.254065887" observedRunningTime="2025-10-14 13:33:17.511585353 +0000 UTC m=+1623.321874692" watchObservedRunningTime="2025-10-14 13:33:17.515115206 +0000 UTC m=+1623.325404525" Oct 14 13:33:17.567724 master-1 kubenswrapper[4740]: I1014 13:33:17.567633 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" podStartSLOduration=2.065609705 podStartE2EDuration="12.567612764s" podCreationTimestamp="2025-10-14 13:33:05 +0000 UTC" firstStartedPulling="2025-10-14 13:33:06.766290003 +0000 UTC m=+1612.576579332" lastFinishedPulling="2025-10-14 13:33:17.268293062 +0000 UTC m=+1623.078582391" observedRunningTime="2025-10-14 13:33:17.56408389 +0000 UTC m=+1623.374373269" watchObservedRunningTime="2025-10-14 13:33:17.567612764 +0000 UTC m=+1623.377902103" Oct 14 13:33:17.569089 master-1 kubenswrapper[4740]: I1014 13:33:17.569047 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" podStartSLOduration=1.8921633500000001 podStartE2EDuration="12.569039552s" podCreationTimestamp="2025-10-14 13:33:05 +0000 UTC" firstStartedPulling="2025-10-14 13:33:06.521664826 +0000 UTC m=+1612.331954155" lastFinishedPulling="2025-10-14 13:33:17.198541028 +0000 UTC m=+1623.008830357" observedRunningTime="2025-10-14 13:33:17.540025195 +0000 UTC m=+1623.350314524" watchObservedRunningTime="2025-10-14 13:33:17.569039552 +0000 UTC m=+1623.379328901" Oct 14 13:33:17.597265 master-1 kubenswrapper[4740]: I1014 13:33:17.597182 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" podStartSLOduration=1.73557049 podStartE2EDuration="12.597162125s" podCreationTimestamp="2025-10-14 
13:33:05 +0000 UTC" firstStartedPulling="2025-10-14 13:33:06.297269995 +0000 UTC m=+1612.107559324" lastFinishedPulling="2025-10-14 13:33:17.15886163 +0000 UTC m=+1622.969150959" observedRunningTime="2025-10-14 13:33:17.594146635 +0000 UTC m=+1623.404435984" watchObservedRunningTime="2025-10-14 13:33:17.597162125 +0000 UTC m=+1623.407451464" Oct 14 13:33:17.629875 master-1 kubenswrapper[4740]: I1014 13:33:17.629806 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" podStartSLOduration=2.260506366 podStartE2EDuration="12.629787417s" podCreationTimestamp="2025-10-14 13:33:05 +0000 UTC" firstStartedPulling="2025-10-14 13:33:06.294149443 +0000 UTC m=+1612.104438772" lastFinishedPulling="2025-10-14 13:33:16.663430494 +0000 UTC m=+1622.473719823" observedRunningTime="2025-10-14 13:33:17.62230462 +0000 UTC m=+1623.432593969" watchObservedRunningTime="2025-10-14 13:33:17.629787417 +0000 UTC m=+1623.440076756" Oct 14 13:33:17.694791 master-1 kubenswrapper[4740]: I1014 13:33:17.694646 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" podStartSLOduration=2.310749294 podStartE2EDuration="12.694625441s" podCreationTimestamp="2025-10-14 13:33:05 +0000 UTC" firstStartedPulling="2025-10-14 13:33:06.139094654 +0000 UTC m=+1611.949383983" lastFinishedPulling="2025-10-14 13:33:16.522970811 +0000 UTC m=+1622.333260130" observedRunningTime="2025-10-14 13:33:17.675937907 +0000 UTC m=+1623.486227236" watchObservedRunningTime="2025-10-14 13:33:17.694625441 +0000 UTC m=+1623.504914780" Oct 14 13:33:17.744322 master-1 kubenswrapper[4740]: I1014 13:33:17.744208 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" podStartSLOduration=3.031674418 podStartE2EDuration="13.744189652s" 
podCreationTimestamp="2025-10-14 13:33:04 +0000 UTC" firstStartedPulling="2025-10-14 13:33:05.847144117 +0000 UTC m=+1611.657433446" lastFinishedPulling="2025-10-14 13:33:16.559659351 +0000 UTC m=+1622.369948680" observedRunningTime="2025-10-14 13:33:17.702715945 +0000 UTC m=+1623.513005294" watchObservedRunningTime="2025-10-14 13:33:17.744189652 +0000 UTC m=+1623.554478981" Oct 14 13:33:18.512832 master-1 kubenswrapper[4740]: I1014 13:33:18.512736 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk" Oct 14 13:33:18.515719 master-1 kubenswrapper[4740]: I1014 13:33:18.515677 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz" Oct 14 13:33:18.515991 master-1 kubenswrapper[4740]: I1014 13:33:18.515950 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp" Oct 14 13:33:18.516074 master-1 kubenswrapper[4740]: I1014 13:33:18.516055 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv" Oct 14 13:33:18.516129 master-1 kubenswrapper[4740]: I1014 13:33:18.516087 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8" Oct 14 13:33:18.516290 master-1 kubenswrapper[4740]: I1014 13:33:18.516259 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd" Oct 14 13:33:18.520964 master-1 kubenswrapper[4740]: I1014 13:33:18.520921 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg" Oct 14 13:33:26.069076 master-1 
kubenswrapper[4740]: I1014 13:33:26.068509 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk" Oct 14 13:33:49.607919 master-1 kubenswrapper[4740]: I1014 13:33:49.607866 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-1"] Oct 14 13:33:49.608987 master-1 kubenswrapper[4740]: I1014 13:33:49.608962 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:49.612037 master-1 kubenswrapper[4740]: I1014 13:33:49.611992 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-p7d8w" Oct 14 13:33:49.636629 master-1 kubenswrapper[4740]: I1014 13:33:49.635470 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-1"] Oct 14 13:33:49.752254 master-1 kubenswrapper[4740]: I1014 13:33:49.744944 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c8779dc-91df-4a10-8138-c2bc64d313a1-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:49.752254 master-1 kubenswrapper[4740]: I1014 13:33:49.745013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c8779dc-91df-4a10-8138-c2bc64d313a1-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:49.847273 master-1 kubenswrapper[4740]: I1014 13:33:49.846030 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c8779dc-91df-4a10-8138-c2bc64d313a1-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:49.847273 master-1 kubenswrapper[4740]: I1014 13:33:49.846117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c8779dc-91df-4a10-8138-c2bc64d313a1-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:49.847273 master-1 kubenswrapper[4740]: I1014 13:33:49.846268 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c8779dc-91df-4a10-8138-c2bc64d313a1-kubelet-dir\") pod \"revision-pruner-6-master-1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:49.886953 master-1 kubenswrapper[4740]: I1014 13:33:49.886830 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c8779dc-91df-4a10-8138-c2bc64d313a1-kube-api-access\") pod \"revision-pruner-6-master-1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:49.929876 master-1 kubenswrapper[4740]: I1014 13:33:49.929806 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:50.485275 master-1 kubenswrapper[4740]: I1014 13:33:50.484319 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-6-master-1"] Oct 14 13:33:50.499933 master-1 kubenswrapper[4740]: W1014 13:33:50.499895 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3c8779dc_91df_4a10_8138_c2bc64d313a1.slice/crio-99d49e9a72846676152a2eefdeee43c199dcfef0687a5c5d0d7b8459ac42c036 WatchSource:0}: Error finding container 99d49e9a72846676152a2eefdeee43c199dcfef0687a5c5d0d7b8459ac42c036: Status 404 returned error can't find the container with id 99d49e9a72846676152a2eefdeee43c199dcfef0687a5c5d0d7b8459ac42c036 Oct 14 13:33:50.787697 master-1 kubenswrapper[4740]: I1014 13:33:50.787581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-1" event={"ID":"3c8779dc-91df-4a10-8138-c2bc64d313a1","Type":"ContainerStarted","Data":"99d49e9a72846676152a2eefdeee43c199dcfef0687a5c5d0d7b8459ac42c036"} Oct 14 13:33:51.798362 master-1 kubenswrapper[4740]: I1014 13:33:51.798295 4740 generic.go:334] "Generic (PLEG): container finished" podID="3c8779dc-91df-4a10-8138-c2bc64d313a1" containerID="3b43511e4c6645ad54a90b25c7236371a374d8fce28bb99ee863fb77c3e29e2b" exitCode=0 Oct 14 13:33:51.799338 master-1 kubenswrapper[4740]: I1014 13:33:51.798367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-1" event={"ID":"3c8779dc-91df-4a10-8138-c2bc64d313a1","Type":"ContainerDied","Data":"3b43511e4c6645ad54a90b25c7236371a374d8fce28bb99ee863fb77c3e29e2b"} Oct 14 13:33:53.206264 master-1 kubenswrapper[4740]: I1014 13:33:53.206157 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:53.405883 master-1 kubenswrapper[4740]: I1014 13:33:53.405451 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c8779dc-91df-4a10-8138-c2bc64d313a1-kube-api-access\") pod \"3c8779dc-91df-4a10-8138-c2bc64d313a1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " Oct 14 13:33:53.405883 master-1 kubenswrapper[4740]: I1014 13:33:53.405569 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c8779dc-91df-4a10-8138-c2bc64d313a1-kubelet-dir\") pod \"3c8779dc-91df-4a10-8138-c2bc64d313a1\" (UID: \"3c8779dc-91df-4a10-8138-c2bc64d313a1\") " Oct 14 13:33:53.405883 master-1 kubenswrapper[4740]: I1014 13:33:53.405678 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c8779dc-91df-4a10-8138-c2bc64d313a1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3c8779dc-91df-4a10-8138-c2bc64d313a1" (UID: "3c8779dc-91df-4a10-8138-c2bc64d313a1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:33:53.405883 master-1 kubenswrapper[4740]: I1014 13:33:53.405850 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c8779dc-91df-4a10-8138-c2bc64d313a1-kubelet-dir\") on node \"master-1\" DevicePath \"\"" Oct 14 13:33:53.408428 master-1 kubenswrapper[4740]: I1014 13:33:53.408353 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8779dc-91df-4a10-8138-c2bc64d313a1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c8779dc-91df-4a10-8138-c2bc64d313a1" (UID: "3c8779dc-91df-4a10-8138-c2bc64d313a1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:53.507943 master-1 kubenswrapper[4740]: I1014 13:33:53.507855 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c8779dc-91df-4a10-8138-c2bc64d313a1-kube-api-access\") on node \"master-1\" DevicePath \"\"" Oct 14 13:33:53.816752 master-1 kubenswrapper[4740]: I1014 13:33:53.816667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-6-master-1" event={"ID":"3c8779dc-91df-4a10-8138-c2bc64d313a1","Type":"ContainerDied","Data":"99d49e9a72846676152a2eefdeee43c199dcfef0687a5c5d0d7b8459ac42c036"} Oct 14 13:33:53.816752 master-1 kubenswrapper[4740]: I1014 13:33:53.816740 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d49e9a72846676152a2eefdeee43c199dcfef0687a5c5d0d7b8459ac42c036" Oct 14 13:33:53.816993 master-1 kubenswrapper[4740]: I1014 13:33:53.816921 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-6-master-1" Oct 14 13:33:53.889403 master-1 kubenswrapper[4740]: I1014 13:33:53.889303 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"] Oct 14 13:33:53.892476 master-1 kubenswrapper[4740]: I1014 13:33:53.892169 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-master-1"] Oct 14 13:33:54.951457 master-1 kubenswrapper[4740]: I1014 13:33:54.951394 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946295a4-6f1e-44dd-a7f4-ab062bf3f4b9" path="/var/lib/kubelet/pods/946295a4-6f1e-44dd-a7f4-ab062bf3f4b9/volumes" Oct 14 13:34:08.979430 master-1 kubenswrapper[4740]: I1014 13:34:08.979355 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bd48d54dc-xbxqd"] Oct 14 13:34:08.980122 master-1 kubenswrapper[4740]: E1014 13:34:08.979689 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8779dc-91df-4a10-8138-c2bc64d313a1" containerName="pruner" Oct 14 13:34:08.980122 master-1 kubenswrapper[4740]: I1014 13:34:08.979707 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8779dc-91df-4a10-8138-c2bc64d313a1" containerName="pruner" Oct 14 13:34:08.980122 master-1 kubenswrapper[4740]: I1014 13:34:08.979902 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8779dc-91df-4a10-8138-c2bc64d313a1" containerName="pruner" Oct 14 13:34:08.980907 master-1 kubenswrapper[4740]: I1014 13:34:08.980871 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:08.988139 master-1 kubenswrapper[4740]: I1014 13:34:08.988079 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Oct 14 13:34:08.988375 master-1 kubenswrapper[4740]: I1014 13:34:08.988165 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 14 13:34:08.988417 master-1 kubenswrapper[4740]: I1014 13:34:08.988396 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Oct 14 13:34:08.997195 master-1 kubenswrapper[4740]: I1014 13:34:08.997099 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd48d54dc-xbxqd"]
Oct 14 13:34:09.082539 master-1 kubenswrapper[4740]: I1014 13:34:09.082377 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f80a58-e3b0-424d-b54f-a32ccd85555f-config\") pod \"dnsmasq-dns-5bd48d54dc-xbxqd\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:09.082539 master-1 kubenswrapper[4740]: I1014 13:34:09.082516 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m6zc\" (UniqueName: \"kubernetes.io/projected/e3f80a58-e3b0-424d-b54f-a32ccd85555f-kube-api-access-8m6zc\") pod \"dnsmasq-dns-5bd48d54dc-xbxqd\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:09.189255 master-1 kubenswrapper[4740]: I1014 13:34:09.185543 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f80a58-e3b0-424d-b54f-a32ccd85555f-config\") pod \"dnsmasq-dns-5bd48d54dc-xbxqd\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:09.189255 master-1 kubenswrapper[4740]: I1014 13:34:09.186520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f80a58-e3b0-424d-b54f-a32ccd85555f-config\") pod \"dnsmasq-dns-5bd48d54dc-xbxqd\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:09.189255 master-1 kubenswrapper[4740]: I1014 13:34:09.186775 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m6zc\" (UniqueName: \"kubernetes.io/projected/e3f80a58-e3b0-424d-b54f-a32ccd85555f-kube-api-access-8m6zc\") pod \"dnsmasq-dns-5bd48d54dc-xbxqd\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:09.210349 master-1 kubenswrapper[4740]: I1014 13:34:09.206966 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m6zc\" (UniqueName: \"kubernetes.io/projected/e3f80a58-e3b0-424d-b54f-a32ccd85555f-kube-api-access-8m6zc\") pod \"dnsmasq-dns-5bd48d54dc-xbxqd\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:09.311549 master-1 kubenswrapper[4740]: I1014 13:34:09.311445 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd"
Oct 14 13:34:09.759955 master-1 kubenswrapper[4740]: I1014 13:34:09.759894 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd48d54dc-xbxqd"]
Oct 14 13:34:09.764516 master-1 kubenswrapper[4740]: W1014 13:34:09.764425 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3f80a58_e3b0_424d_b54f_a32ccd85555f.slice/crio-5e84518724731b5ddbb4f9320c96bdf3833d04d79cb550ee05c3effb2797afcd WatchSource:0}: Error finding container 5e84518724731b5ddbb4f9320c96bdf3833d04d79cb550ee05c3effb2797afcd: Status 404 returned error can't find the container with id 5e84518724731b5ddbb4f9320c96bdf3833d04d79cb550ee05c3effb2797afcd
Oct 14 13:34:09.969889 master-1 kubenswrapper[4740]: I1014 13:34:09.969785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd" event={"ID":"e3f80a58-e3b0-424d-b54f-a32ccd85555f","Type":"ContainerStarted","Data":"5e84518724731b5ddbb4f9320c96bdf3833d04d79cb550ee05c3effb2797afcd"}
Oct 14 13:34:11.774375 master-1 kubenswrapper[4740]: I1014 13:34:11.774324 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd48d54dc-xbxqd"]
Oct 14 13:34:11.807520 master-1 kubenswrapper[4740]: I1014 13:34:11.806613 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bf7489945-tjzl4"]
Oct 14 13:34:11.808649 master-1 kubenswrapper[4740]: I1014 13:34:11.808607 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.811998 master-1 kubenswrapper[4740]: I1014 13:34:11.811952 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 14 13:34:11.828138 master-1 kubenswrapper[4740]: I1014 13:34:11.827348 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bf7489945-tjzl4"]
Oct 14 13:34:11.850012 master-1 kubenswrapper[4740]: I1014 13:34:11.849370 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-dns-svc\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.850012 master-1 kubenswrapper[4740]: I1014 13:34:11.849555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-config\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.850012 master-1 kubenswrapper[4740]: I1014 13:34:11.849619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vqw\" (UniqueName: \"kubernetes.io/projected/b2247769-a88f-4909-98d1-2cb5b442c9de-kube-api-access-z6vqw\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.951681 master-1 kubenswrapper[4740]: I1014 13:34:11.951622 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-dns-svc\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.951949 master-1 kubenswrapper[4740]: I1014 13:34:11.951718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-config\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.951949 master-1 kubenswrapper[4740]: I1014 13:34:11.951750 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vqw\" (UniqueName: \"kubernetes.io/projected/b2247769-a88f-4909-98d1-2cb5b442c9de-kube-api-access-z6vqw\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.953031 master-1 kubenswrapper[4740]: I1014 13:34:11.952967 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-config\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.953102 master-1 kubenswrapper[4740]: I1014 13:34:11.953014 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-dns-svc\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:11.986730 master-1 kubenswrapper[4740]: I1014 13:34:11.986666 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vqw\" (UniqueName: \"kubernetes.io/projected/b2247769-a88f-4909-98d1-2cb5b442c9de-kube-api-access-z6vqw\") pod \"dnsmasq-dns-7bf7489945-tjzl4\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:12.130810 master-1 kubenswrapper[4740]: I1014 13:34:12.130675 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:12.566602 master-1 kubenswrapper[4740]: I1014 13:34:12.566553 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bf7489945-tjzl4"]
Oct 14 13:34:13.044461 master-1 kubenswrapper[4740]: I1014 13:34:13.044375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" event={"ID":"b2247769-a88f-4909-98d1-2cb5b442c9de","Type":"ContainerStarted","Data":"6183e8fdcfcb5aa531353339b59de3a5a945bba0fdecc57f5de40fb4b70b72b0"}
Oct 14 13:34:18.487311 master-1 kubenswrapper[4740]: I1014 13:34:18.487094 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Oct 14 13:34:18.489171 master-1 kubenswrapper[4740]: I1014 13:34:18.489135 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.495538 master-1 kubenswrapper[4740]: I1014 13:34:18.491750 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Oct 14 13:34:18.495538 master-1 kubenswrapper[4740]: I1014 13:34:18.492432 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Oct 14 13:34:18.495538 master-1 kubenswrapper[4740]: I1014 13:34:18.492969 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Oct 14 13:34:18.495538 master-1 kubenswrapper[4740]: I1014 13:34:18.493290 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Oct 14 13:34:18.495538 master-1 kubenswrapper[4740]: I1014 13:34:18.493560 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Oct 14 13:34:18.495538 master-1 kubenswrapper[4740]: I1014 13:34:18.493750 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Oct 14 13:34:18.506404 master-1 kubenswrapper[4740]: I1014 13:34:18.503255 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Oct 14 13:34:18.687148 master-1 kubenswrapper[4740]: I1014 13:34:18.687058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-61f9432c-d35b-4f67-b699-40e9b9d1fd62\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3fe034-5fa8-46c3-807f-85b672dd5e4a\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.687405 master-1 kubenswrapper[4740]: I1014 13:34:18.687183 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmzj\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-kube-api-access-lfmzj\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.687405 master-1 kubenswrapper[4740]: I1014 13:34:18.687221 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.687405 master-1 kubenswrapper[4740]: I1014 13:34:18.687278 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/99ce92c4-34cd-4599-9614-10e7663bd9e7-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.687405 master-1 kubenswrapper[4740]: I1014 13:34:18.687305 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.687596 master-1 kubenswrapper[4740]: I1014 13:34:18.687455 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-server-conf\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.687596 master-1 kubenswrapper[4740]: I1014 13:34:18.687517 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-config-data\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.687596 master-1 kubenswrapper[4740]: I1014 13:34:18.687542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/99ce92c4-34cd-4599-9614-10e7663bd9e7-pod-info\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.688500 master-1 kubenswrapper[4740]: I1014 13:34:18.687720 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.688624 master-1 kubenswrapper[4740]: I1014 13:34:18.688533 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.689148 master-1 kubenswrapper[4740]: I1014 13:34:18.689110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791023 master-1 kubenswrapper[4740]: I1014 13:34:18.790919 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791283 master-1 kubenswrapper[4740]: I1014 13:34:18.791262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/99ce92c4-34cd-4599-9614-10e7663bd9e7-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791411 master-1 kubenswrapper[4740]: I1014 13:34:18.791395 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791542 master-1 kubenswrapper[4740]: I1014 13:34:18.791524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-server-conf\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791633 master-1 kubenswrapper[4740]: I1014 13:34:18.791619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-config-data\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791720 master-1 kubenswrapper[4740]: I1014 13:34:18.791706 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/99ce92c4-34cd-4599-9614-10e7663bd9e7-pod-info\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791825 master-1 kubenswrapper[4740]: I1014 13:34:18.791809 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.791933 master-1 kubenswrapper[4740]: I1014 13:34:18.791916 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.792055 master-1 kubenswrapper[4740]: I1014 13:34:18.792014 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.792213 master-1 kubenswrapper[4740]: I1014 13:34:18.792179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-61f9432c-d35b-4f67-b699-40e9b9d1fd62\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3fe034-5fa8-46c3-807f-85b672dd5e4a\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.792361 master-1 kubenswrapper[4740]: I1014 13:34:18.792345 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfmzj\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-kube-api-access-lfmzj\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.795016 master-1 kubenswrapper[4740]: I1014 13:34:18.794756 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-server-conf\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.796520 master-1 kubenswrapper[4740]: I1014 13:34:18.796149 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.799335 master-1 kubenswrapper[4740]: I1014 13:34:18.797336 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-config-data\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.799335 master-1 kubenswrapper[4740]: I1014 13:34:18.798868 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.799605 master-1 kubenswrapper[4740]: I1014 13:34:18.799544 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 14 13:34:18.799605 master-1 kubenswrapper[4740]: I1014 13:34:18.799575 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-61f9432c-d35b-4f67-b699-40e9b9d1fd62\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3fe034-5fa8-46c3-807f-85b672dd5e4a\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/db4447816dce9292e4943b8ed801a0736987011d94f853a6a2e0dc2ad09b1f88/globalmount\"" pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.800264 master-1 kubenswrapper[4740]: I1014 13:34:18.799970 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/99ce92c4-34cd-4599-9614-10e7663bd9e7-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.818886 master-1 kubenswrapper[4740]: I1014 13:34:18.818025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.827954 master-1 kubenswrapper[4740]: I1014 13:34:18.827579 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/99ce92c4-34cd-4599-9614-10e7663bd9e7-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.834281 master-1 kubenswrapper[4740]: I1014 13:34:18.833286 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.843855 master-1 kubenswrapper[4740]: I1014 13:34:18.843712 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/99ce92c4-34cd-4599-9614-10e7663bd9e7-pod-info\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.848751 master-1 kubenswrapper[4740]: I1014 13:34:18.848692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfmzj\" (UniqueName: \"kubernetes.io/projected/99ce92c4-34cd-4599-9614-10e7663bd9e7-kube-api-access-lfmzj\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:18.864552 master-1 kubenswrapper[4740]: I1014 13:34:18.857521 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 13:34:18.864552 master-1 kubenswrapper[4740]: I1014 13:34:18.859092 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 14 13:34:18.868341 master-1 kubenswrapper[4740]: I1014 13:34:18.864934 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 13:34:18.937815 master-1 kubenswrapper[4740]: I1014 13:34:18.936606 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctdb\" (UniqueName: \"kubernetes.io/projected/5753ddc2-c44f-411a-a53a-ad0d1a38efed-kube-api-access-2ctdb\") pod \"kube-state-metrics-0\" (UID: \"5753ddc2-c44f-411a-a53a-ad0d1a38efed\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:34:19.081200 master-1 kubenswrapper[4740]: I1014 13:34:19.081035 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctdb\" (UniqueName: \"kubernetes.io/projected/5753ddc2-c44f-411a-a53a-ad0d1a38efed-kube-api-access-2ctdb\") pod \"kube-state-metrics-0\" (UID: \"5753ddc2-c44f-411a-a53a-ad0d1a38efed\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:34:19.275393 master-1 kubenswrapper[4740]: I1014 13:34:19.275330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ctdb\" (UniqueName: \"kubernetes.io/projected/5753ddc2-c44f-411a-a53a-ad0d1a38efed-kube-api-access-2ctdb\") pod \"kube-state-metrics-0\" (UID: \"5753ddc2-c44f-411a-a53a-ad0d1a38efed\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:34:19.281318 master-1 kubenswrapper[4740]: I1014 13:34:19.281256 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 14 13:34:20.459655 master-1 kubenswrapper[4740]: I1014 13:34:20.459585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-61f9432c-d35b-4f67-b699-40e9b9d1fd62\" (UniqueName: \"kubernetes.io/csi/topolvm.io^af3fe034-5fa8-46c3-807f-85b672dd5e4a\") pod \"rabbitmq-server-2\" (UID: \"99ce92c4-34cd-4599-9614-10e7663bd9e7\") " pod="openstack/rabbitmq-server-2"
Oct 14 13:34:20.913441 master-1 kubenswrapper[4740]: I1014 13:34:20.913286 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Oct 14 13:34:21.536815 master-1 kubenswrapper[4740]: I1014 13:34:21.536764 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Oct 14 13:34:21.538556 master-1 kubenswrapper[4740]: I1014 13:34:21.538520 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.541169 master-1 kubenswrapper[4740]: I1014 13:34:21.541133 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Oct 14 13:34:21.541892 master-1 kubenswrapper[4740]: I1014 13:34:21.541864 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Oct 14 13:34:21.542135 master-1 kubenswrapper[4740]: I1014 13:34:21.542119 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Oct 14 13:34:21.644348 master-1 kubenswrapper[4740]: I1014 13:34:21.641759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f3f7dab7-f98a-4577-846d-8ffce7cab78a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.644348 master-1 kubenswrapper[4740]: I1014 13:34:21.641814 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/f3f7dab7-f98a-4577-846d-8ffce7cab78a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.644348 master-1 kubenswrapper[4740]: I1014 13:34:21.641864 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f3f7dab7-f98a-4577-846d-8ffce7cab78a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.644348 master-1 kubenswrapper[4740]: I1014 13:34:21.641889 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f3f7dab7-f98a-4577-846d-8ffce7cab78a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.644348 master-1 kubenswrapper[4740]: I1014 13:34:21.641942 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6kng\" (UniqueName: \"kubernetes.io/projected/f3f7dab7-f98a-4577-846d-8ffce7cab78a-kube-api-access-p6kng\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.644348 master-1 kubenswrapper[4740]: I1014 13:34:21.641982 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f3f7dab7-f98a-4577-846d-8ffce7cab78a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.718338 master-1 kubenswrapper[4740]: I1014 13:34:21.718265 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Oct 14 13:34:21.743668 master-1 kubenswrapper[4740]: I1014 13:34:21.743595 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f3f7dab7-f98a-4577-846d-8ffce7cab78a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.743668 master-1 kubenswrapper[4740]: I1014 13:34:21.743663 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f3f7dab7-f98a-4577-846d-8ffce7cab78a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.743936 master-1 kubenswrapper[4740]: I1014 13:34:21.743724 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6kng\" (UniqueName: \"kubernetes.io/projected/f3f7dab7-f98a-4577-846d-8ffce7cab78a-kube-api-access-p6kng\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.743936 master-1 kubenswrapper[4740]: I1014 13:34:21.743777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f3f7dab7-f98a-4577-846d-8ffce7cab78a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.743936 master-1 kubenswrapper[4740]: I1014 13:34:21.743818 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f3f7dab7-f98a-4577-846d-8ffce7cab78a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.743936 master-1 kubenswrapper[4740]: I1014 13:34:21.743842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/f3f7dab7-f98a-4577-846d-8ffce7cab78a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.744600 master-1 kubenswrapper[4740]: I1014 13:34:21.744575 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/f3f7dab7-f98a-4577-846d-8ffce7cab78a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.748396 master-1 kubenswrapper[4740]: I1014 13:34:21.748345 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f3f7dab7-f98a-4577-846d-8ffce7cab78a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.748772 master-1 kubenswrapper[4740]: I1014 13:34:21.748727 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f3f7dab7-f98a-4577-846d-8ffce7cab78a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.749751 master-1 kubenswrapper[4740]: I1014 13:34:21.749705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f3f7dab7-f98a-4577-846d-8ffce7cab78a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.750790 master-1 kubenswrapper[4740]: I1014 13:34:21.750741 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f3f7dab7-f98a-4577-846d-8ffce7cab78a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.764341 master-1 kubenswrapper[4740]: I1014 13:34:21.764305 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6kng\" (UniqueName: \"kubernetes.io/projected/f3f7dab7-f98a-4577-846d-8ffce7cab78a-kube-api-access-p6kng\") pod \"alertmanager-metric-storage-0\" (UID: \"f3f7dab7-f98a-4577-846d-8ffce7cab78a\") " pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:21.874032 master-1 kubenswrapper[4740]: I1014 13:34:21.873814 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Oct 14 13:34:23.774938 master-1 kubenswrapper[4740]: I1014 13:34:23.774893 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 13:34:23.843558 master-1 kubenswrapper[4740]: W1014 13:34:23.843483 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3f7dab7_f98a_4577_846d_8ffce7cab78a.slice/crio-fb2e47bc90345e9d8e344a094e8f673c0e0afd855f680bebe3e63a22c9910d30 WatchSource:0}: Error finding container fb2e47bc90345e9d8e344a094e8f673c0e0afd855f680bebe3e63a22c9910d30: Status 404 returned error can't find the container with id fb2e47bc90345e9d8e344a094e8f673c0e0afd855f680bebe3e63a22c9910d30
Oct 14 13:34:24.166996 master-1 kubenswrapper[4740]: I1014 13:34:24.166773 4740 generic.go:334] "Generic (PLEG): container finished" podID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerID="cb15e6658f33630641614c47e29b1f962d807bfe907e52320f7cebfdeef74662" exitCode=0
Oct 14 13:34:24.167787 master-1 kubenswrapper[4740]: I1014 13:34:24.167518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" event={"ID":"b2247769-a88f-4909-98d1-2cb5b442c9de","Type":"ContainerDied","Data":"cb15e6658f33630641614c47e29b1f962d807bfe907e52320f7cebfdeef74662"}
Oct 14 13:34:24.170661 master-1 kubenswrapper[4740]: I1014 13:34:24.170601 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"f3f7dab7-f98a-4577-846d-8ffce7cab78a","Type":"ContainerStarted","Data":"fb2e47bc90345e9d8e344a094e8f673c0e0afd855f680bebe3e63a22c9910d30"}
Oct 14 13:34:24.172999 master-1 kubenswrapper[4740]: I1014 13:34:24.172858 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0"
event={"ID":"5753ddc2-c44f-411a-a53a-ad0d1a38efed","Type":"ContainerStarted","Data":"19bd023cc3e7d3f13382a1c6ff72c76d83c7272da7550367e00f0cf0cf6edf69"} Oct 14 13:34:24.178491 master-1 kubenswrapper[4740]: I1014 13:34:24.178400 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:34:24.179906 master-1 kubenswrapper[4740]: I1014 13:34:24.179553 4740 generic.go:334] "Generic (PLEG): container finished" podID="e3f80a58-e3b0-424d-b54f-a32ccd85555f" containerID="2cfe77a33ade91b661f927b817ccc3fcb429048cdddc752e81cf63b3ee5aab0a" exitCode=0 Oct 14 13:34:24.179906 master-1 kubenswrapper[4740]: I1014 13:34:24.179600 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd" event={"ID":"e3f80a58-e3b0-424d-b54f-a32ccd85555f","Type":"ContainerDied","Data":"2cfe77a33ade91b661f927b817ccc3fcb429048cdddc752e81cf63b3ee5aab0a"} Oct 14 13:34:24.186205 master-1 kubenswrapper[4740]: I1014 13:34:24.185716 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Oct 14 13:34:24.878993 master-1 kubenswrapper[4740]: I1014 13:34:24.878899 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Oct 14 13:34:25.077177 master-1 kubenswrapper[4740]: I1014 13:34:25.077110 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd" Oct 14 13:34:25.192459 master-1 kubenswrapper[4740]: I1014 13:34:25.192397 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd" event={"ID":"e3f80a58-e3b0-424d-b54f-a32ccd85555f","Type":"ContainerDied","Data":"5e84518724731b5ddbb4f9320c96bdf3833d04d79cb550ee05c3effb2797afcd"} Oct 14 13:34:25.192459 master-1 kubenswrapper[4740]: I1014 13:34:25.192455 4740 scope.go:117] "RemoveContainer" containerID="2cfe77a33ade91b661f927b817ccc3fcb429048cdddc752e81cf63b3ee5aab0a" Oct 14 13:34:25.192769 master-1 kubenswrapper[4740]: I1014 13:34:25.192472 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd48d54dc-xbxqd" Oct 14 13:34:25.194599 master-1 kubenswrapper[4740]: I1014 13:34:25.194546 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99ce92c4-34cd-4599-9614-10e7663bd9e7","Type":"ContainerStarted","Data":"49e04d94b7a9c2a1a2753c51c130e0d120e3b293c324d791ca0acbc006364c68"} Oct 14 13:34:25.197926 master-1 kubenswrapper[4740]: I1014 13:34:25.197886 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" event={"ID":"b2247769-a88f-4909-98d1-2cb5b442c9de","Type":"ContainerStarted","Data":"a447d8cc4e2a7e8ade746b9ea249b20641c4fc29b36f3d6cd5430d60baa9ad7b"} Oct 14 13:34:25.198707 master-1 kubenswrapper[4740]: I1014 13:34:25.198420 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" Oct 14 13:34:25.217358 master-1 kubenswrapper[4740]: I1014 13:34:25.217218 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m6zc\" (UniqueName: \"kubernetes.io/projected/e3f80a58-e3b0-424d-b54f-a32ccd85555f-kube-api-access-8m6zc\") pod \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " 
Oct 14 13:34:25.217691 master-1 kubenswrapper[4740]: I1014 13:34:25.217642 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f80a58-e3b0-424d-b54f-a32ccd85555f-config\") pod \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\" (UID: \"e3f80a58-e3b0-424d-b54f-a32ccd85555f\") " Oct 14 13:34:25.226244 master-1 kubenswrapper[4740]: I1014 13:34:25.226143 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f80a58-e3b0-424d-b54f-a32ccd85555f-kube-api-access-8m6zc" (OuterVolumeSpecName: "kube-api-access-8m6zc") pod "e3f80a58-e3b0-424d-b54f-a32ccd85555f" (UID: "e3f80a58-e3b0-424d-b54f-a32ccd85555f"). InnerVolumeSpecName "kube-api-access-8m6zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:25.236553 master-1 kubenswrapper[4740]: I1014 13:34:25.236462 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f80a58-e3b0-424d-b54f-a32ccd85555f-config" (OuterVolumeSpecName: "config") pod "e3f80a58-e3b0-424d-b54f-a32ccd85555f" (UID: "e3f80a58-e3b0-424d-b54f-a32ccd85555f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:34:25.319436 master-1 kubenswrapper[4740]: I1014 13:34:25.319356 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m6zc\" (UniqueName: \"kubernetes.io/projected/e3f80a58-e3b0-424d-b54f-a32ccd85555f-kube-api-access-8m6zc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:34:25.319436 master-1 kubenswrapper[4740]: I1014 13:34:25.319419 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f80a58-e3b0-424d-b54f-a32ccd85555f-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:34:25.904493 master-1 kubenswrapper[4740]: I1014 13:34:25.904442 4740 scope.go:117] "RemoveContainer" containerID="97abe0d8c7e85255ddcf3f08db5d8fadc02560d6e693cb64ea478661abddbf69" Oct 14 13:34:27.217427 master-1 kubenswrapper[4740]: I1014 13:34:27.217368 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5753ddc2-c44f-411a-a53a-ad0d1a38efed","Type":"ContainerStarted","Data":"f09c3093152b6c7dc691029f6591431e0adb91c6b489af94a85d72caa465eccb"} Oct 14 13:34:27.217882 master-1 kubenswrapper[4740]: I1014 13:34:27.217532 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 13:34:27.891876 master-1 kubenswrapper[4740]: I1014 13:34:27.891791 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd48d54dc-xbxqd"] Oct 14 13:34:27.891876 master-1 kubenswrapper[4740]: I1014 13:34:27.891877 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bd48d54dc-xbxqd"] Oct 14 13:34:28.952710 master-1 kubenswrapper[4740]: I1014 13:34:28.952623 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f80a58-e3b0-424d-b54f-a32ccd85555f" path="/var/lib/kubelet/pods/e3f80a58-e3b0-424d-b54f-a32ccd85555f/volumes" Oct 14 13:34:28.967963 master-1 kubenswrapper[4740]: I1014 
13:34:28.966045 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" podStartSLOduration=7.248110808 podStartE2EDuration="17.96602376s" podCreationTimestamp="2025-10-14 13:34:11 +0000 UTC" firstStartedPulling="2025-10-14 13:34:12.57052071 +0000 UTC m=+1678.380810039" lastFinishedPulling="2025-10-14 13:34:23.288433662 +0000 UTC m=+1689.098722991" observedRunningTime="2025-10-14 13:34:28.895402984 +0000 UTC m=+1694.705692333" watchObservedRunningTime="2025-10-14 13:34:28.96602376 +0000 UTC m=+1694.776313089" Oct 14 13:34:28.974783 master-1 kubenswrapper[4740]: I1014 13:34:28.974673 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.447479475 podStartE2EDuration="10.974648258s" podCreationTimestamp="2025-10-14 13:34:18 +0000 UTC" firstStartedPulling="2025-10-14 13:34:23.774789505 +0000 UTC m=+1689.585078874" lastFinishedPulling="2025-10-14 13:34:26.301958288 +0000 UTC m=+1692.112247657" observedRunningTime="2025-10-14 13:34:28.924065541 +0000 UTC m=+1694.734354860" watchObservedRunningTime="2025-10-14 13:34:28.974648258 +0000 UTC m=+1694.784937587" Oct 14 13:34:29.233630 master-1 kubenswrapper[4740]: I1014 13:34:29.233475 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"f3f7dab7-f98a-4577-846d-8ffce7cab78a","Type":"ContainerStarted","Data":"95093bc136e8aaf8fd0e2d1d54c8c2d569c479c27be37877d8afd7f743b9188b"} Oct 14 13:34:29.759608 master-1 kubenswrapper[4740]: I1014 13:34:29.759502 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wpjmb"] Oct 14 13:34:29.759992 master-1 kubenswrapper[4740]: E1014 13:34:29.759908 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f80a58-e3b0-424d-b54f-a32ccd85555f" containerName="init" Oct 14 13:34:29.759992 master-1 kubenswrapper[4740]: I1014 13:34:29.759923 4740 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f80a58-e3b0-424d-b54f-a32ccd85555f" containerName="init" Oct 14 13:34:29.760096 master-1 kubenswrapper[4740]: I1014 13:34:29.760086 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f80a58-e3b0-424d-b54f-a32ccd85555f" containerName="init" Oct 14 13:34:29.761587 master-1 kubenswrapper[4740]: I1014 13:34:29.761548 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:29.765662 master-1 kubenswrapper[4740]: I1014 13:34:29.765616 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 14 13:34:29.770414 master-1 kubenswrapper[4740]: I1014 13:34:29.770364 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-26cmc"] Oct 14 13:34:29.772188 master-1 kubenswrapper[4740]: I1014 13:34:29.771738 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.773933 master-1 kubenswrapper[4740]: I1014 13:34:29.773878 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 14 13:34:29.782156 master-1 kubenswrapper[4740]: I1014 13:34:29.782123 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 14 13:34:29.787896 master-1 kubenswrapper[4740]: I1014 13:34:29.787850 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wpjmb"] Oct 14 13:34:29.795122 master-1 kubenswrapper[4740]: I1014 13:34:29.795064 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-26cmc"] Oct 14 13:34:29.906004 master-1 kubenswrapper[4740]: I1014 13:34:29.905119 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-log-ovn\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.906270 master-1 kubenswrapper[4740]: I1014 13:34:29.906145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2hx\" (UniqueName: \"kubernetes.io/projected/a8c155bd-baa3-49a7-bada-ec4d01119872-kube-api-access-lx2hx\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.906270 master-1 kubenswrapper[4740]: I1014 13:34:29.906204 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-log\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:29.906384 master-1 kubenswrapper[4740]: I1014 13:34:29.906272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-run\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.906434 master-1 kubenswrapper[4740]: I1014 13:34:29.906378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-run\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:29.906484 master-1 kubenswrapper[4740]: I1014 13:34:29.906431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a8c155bd-baa3-49a7-bada-ec4d01119872-scripts\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.906548 master-1 kubenswrapper[4740]: I1014 13:34:29.906507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8n8p\" (UniqueName: \"kubernetes.io/projected/1da977c7-6e59-4af8-bf2e-644213531487-kube-api-access-j8n8p\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:29.906687 master-1 kubenswrapper[4740]: I1014 13:34:29.906645 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8c155bd-baa3-49a7-bada-ec4d01119872-ovn-controller-tls-certs\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.906840 master-1 kubenswrapper[4740]: I1014 13:34:29.906782 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c155bd-baa3-49a7-bada-ec4d01119872-combined-ca-bundle\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.909476 master-1 kubenswrapper[4740]: I1014 13:34:29.909409 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-lib\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:29.909563 master-1 kubenswrapper[4740]: I1014 13:34:29.909479 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da977c7-6e59-4af8-bf2e-644213531487-scripts\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:29.909563 master-1 kubenswrapper[4740]: I1014 13:34:29.909530 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-run-ovn\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:29.909563 master-1 kubenswrapper[4740]: I1014 13:34:29.909559 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-etc-ovs\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.011466 master-1 kubenswrapper[4740]: I1014 13:34:30.011309 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-log-ovn\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.011466 master-1 kubenswrapper[4740]: I1014 13:34:30.011367 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2hx\" (UniqueName: \"kubernetes.io/projected/a8c155bd-baa3-49a7-bada-ec4d01119872-kube-api-access-lx2hx\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.011466 master-1 kubenswrapper[4740]: I1014 13:34:30.011391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-log\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.011466 master-1 kubenswrapper[4740]: I1014 13:34:30.011406 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-run\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.011466 master-1 kubenswrapper[4740]: I1014 13:34:30.011430 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-run\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.011466 master-1 kubenswrapper[4740]: I1014 13:34:30.011446 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8c155bd-baa3-49a7-bada-ec4d01119872-scripts\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.011466 master-1 kubenswrapper[4740]: I1014 13:34:30.011464 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8n8p\" (UniqueName: \"kubernetes.io/projected/1da977c7-6e59-4af8-bf2e-644213531487-kube-api-access-j8n8p\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.012146 master-1 kubenswrapper[4740]: I1014 13:34:30.011496 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a8c155bd-baa3-49a7-bada-ec4d01119872-ovn-controller-tls-certs\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.012146 master-1 kubenswrapper[4740]: I1014 13:34:30.011524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c155bd-baa3-49a7-bada-ec4d01119872-combined-ca-bundle\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.012146 master-1 kubenswrapper[4740]: I1014 13:34:30.011547 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-lib\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.012146 master-1 kubenswrapper[4740]: I1014 13:34:30.011561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da977c7-6e59-4af8-bf2e-644213531487-scripts\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.012146 master-1 kubenswrapper[4740]: I1014 13:34:30.011578 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-run-ovn\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.012146 master-1 kubenswrapper[4740]: I1014 13:34:30.011600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-etc-ovs\") pod 
\"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.012146 master-1 kubenswrapper[4740]: I1014 13:34:30.012061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-etc-ovs\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.012381 master-1 kubenswrapper[4740]: I1014 13:34:30.012323 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-log-ovn\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.012682 master-1 kubenswrapper[4740]: I1014 13:34:30.012651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-log\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.012791 master-1 kubenswrapper[4740]: I1014 13:34:30.012760 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-run\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.012850 master-1 kubenswrapper[4740]: I1014 13:34:30.012803 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-run\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.013581 master-1 
kubenswrapper[4740]: I1014 13:34:30.013548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a8c155bd-baa3-49a7-bada-ec4d01119872-var-run-ovn\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.013710 master-1 kubenswrapper[4740]: I1014 13:34:30.013663 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1da977c7-6e59-4af8-bf2e-644213531487-var-lib\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.015037 master-1 kubenswrapper[4740]: I1014 13:34:30.014989 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8c155bd-baa3-49a7-bada-ec4d01119872-scripts\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.015771 master-1 kubenswrapper[4740]: I1014 13:34:30.015706 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da977c7-6e59-4af8-bf2e-644213531487-scripts\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.016942 master-1 kubenswrapper[4740]: I1014 13:34:30.016902 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c155bd-baa3-49a7-bada-ec4d01119872-combined-ca-bundle\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.019878 master-1 kubenswrapper[4740]: I1014 13:34:30.019808 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8c155bd-baa3-49a7-bada-ec4d01119872-ovn-controller-tls-certs\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.177079 master-1 kubenswrapper[4740]: I1014 13:34:30.177007 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2hx\" (UniqueName: \"kubernetes.io/projected/a8c155bd-baa3-49a7-bada-ec4d01119872-kube-api-access-lx2hx\") pod \"ovn-controller-26cmc\" (UID: \"a8c155bd-baa3-49a7-bada-ec4d01119872\") " pod="openstack/ovn-controller-26cmc" Oct 14 13:34:30.178691 master-1 kubenswrapper[4740]: I1014 13:34:30.178507 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8n8p\" (UniqueName: \"kubernetes.io/projected/1da977c7-6e59-4af8-bf2e-644213531487-kube-api-access-j8n8p\") pod \"ovn-controller-ovs-wpjmb\" (UID: \"1da977c7-6e59-4af8-bf2e-644213531487\") " pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.243457 master-1 kubenswrapper[4740]: I1014 13:34:30.243379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99ce92c4-34cd-4599-9614-10e7663bd9e7","Type":"ContainerStarted","Data":"806c6e8d87e1e46b164716290b20250688aecdf887e4ef88d1340fa437ddd895"} Oct 14 13:34:30.380088 master-1 kubenswrapper[4740]: I1014 13:34:30.379962 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:30.387212 master-1 kubenswrapper[4740]: I1014 13:34:30.387150 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-26cmc"
Oct 14 13:34:30.959739 master-1 kubenswrapper[4740]: I1014 13:34:30.959659 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-26cmc"]
Oct 14 13:34:31.251291 master-1 kubenswrapper[4740]: I1014 13:34:31.251164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-26cmc" event={"ID":"a8c155bd-baa3-49a7-bada-ec4d01119872","Type":"ContainerStarted","Data":"d2f97322caf36332cc017536a12b67af399f9a45583e19dfb0814c9cdba51d25"}
Oct 14 13:34:32.133185 master-1 kubenswrapper[4740]: I1014 13:34:32.133133 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4"
Oct 14 13:34:32.525325 master-1 kubenswrapper[4740]: I1014 13:34:32.525132 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-z9vwp"]
Oct 14 13:34:32.526491 master-1 kubenswrapper[4740]: I1014 13:34:32.526445 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.534329 master-1 kubenswrapper[4740]: I1014 13:34:32.534258 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Oct 14 13:34:32.534522 master-1 kubenswrapper[4740]: I1014 13:34:32.534385 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Oct 14 13:34:32.557725 master-1 kubenswrapper[4740]: I1014 13:34:32.557668 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e338af78-f165-448a-b83f-83a570b4c9dc-ovn-rundir\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.557899 master-1 kubenswrapper[4740]: I1014 13:34:32.557751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e338af78-f165-448a-b83f-83a570b4c9dc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.557899 master-1 kubenswrapper[4740]: I1014 13:34:32.557822 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27bv\" (UniqueName: \"kubernetes.io/projected/e338af78-f165-448a-b83f-83a570b4c9dc-kube-api-access-g27bv\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.558005 master-1 kubenswrapper[4740]: I1014 13:34:32.557897 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e338af78-f165-448a-b83f-83a570b4c9dc-config\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.558005 master-1 kubenswrapper[4740]: I1014 13:34:32.557941 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e338af78-f165-448a-b83f-83a570b4c9dc-combined-ca-bundle\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.558005 master-1 kubenswrapper[4740]: I1014 13:34:32.557991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e338af78-f165-448a-b83f-83a570b4c9dc-ovs-rundir\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.602113 master-1 kubenswrapper[4740]: I1014 13:34:32.602034 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-z9vwp"]
Oct 14 13:34:32.662957 master-1 kubenswrapper[4740]: I1014 13:34:32.662899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e338af78-f165-448a-b83f-83a570b4c9dc-ovn-rundir\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.663205 master-1 kubenswrapper[4740]: I1014 13:34:32.662964 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e338af78-f165-448a-b83f-83a570b4c9dc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.663205 master-1 kubenswrapper[4740]: I1014 13:34:32.663018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g27bv\" (UniqueName: \"kubernetes.io/projected/e338af78-f165-448a-b83f-83a570b4c9dc-kube-api-access-g27bv\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.663205 master-1 kubenswrapper[4740]: I1014 13:34:32.663076 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e338af78-f165-448a-b83f-83a570b4c9dc-config\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.663205 master-1 kubenswrapper[4740]: I1014 13:34:32.663111 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e338af78-f165-448a-b83f-83a570b4c9dc-combined-ca-bundle\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.663205 master-1 kubenswrapper[4740]: I1014 13:34:32.663144 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e338af78-f165-448a-b83f-83a570b4c9dc-ovs-rundir\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.663808 master-1 kubenswrapper[4740]: I1014 13:34:32.663779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e338af78-f165-448a-b83f-83a570b4c9dc-ovs-rundir\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.663889 master-1 kubenswrapper[4740]: I1014 13:34:32.663867 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e338af78-f165-448a-b83f-83a570b4c9dc-ovn-rundir\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.665299 master-1 kubenswrapper[4740]: I1014 13:34:32.665212 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e338af78-f165-448a-b83f-83a570b4c9dc-config\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.667739 master-1 kubenswrapper[4740]: I1014 13:34:32.667683 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e338af78-f165-448a-b83f-83a570b4c9dc-combined-ca-bundle\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.668380 master-1 kubenswrapper[4740]: I1014 13:34:32.668321 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e338af78-f165-448a-b83f-83a570b4c9dc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.723337 master-1 kubenswrapper[4740]: I1014 13:34:32.719077 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g27bv\" (UniqueName: \"kubernetes.io/projected/e338af78-f165-448a-b83f-83a570b4c9dc-kube-api-access-g27bv\") pod \"ovn-controller-metrics-z9vwp\" (UID: \"e338af78-f165-448a-b83f-83a570b4c9dc\") " pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:32.846671 master-1 kubenswrapper[4740]: I1014 13:34:32.846604 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-z9vwp"
Oct 14 13:34:34.490170 master-1 kubenswrapper[4740]: I1014 13:34:34.489936 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wpjmb"]
Oct 14 13:34:34.491894 master-1 kubenswrapper[4740]: I1014 13:34:34.491861 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-z9vwp"]
Oct 14 13:34:34.507068 master-1 kubenswrapper[4740]: W1014 13:34:34.507018 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da977c7_6e59_4af8_bf2e_644213531487.slice/crio-725e94cb5c6b0eae56a5ea7d35c18a80af2f3945adf39c9da814745906fa62e5 WatchSource:0}: Error finding container 725e94cb5c6b0eae56a5ea7d35c18a80af2f3945adf39c9da814745906fa62e5: Status 404 returned error can't find the container with id 725e94cb5c6b0eae56a5ea7d35c18a80af2f3945adf39c9da814745906fa62e5
Oct 14 13:34:35.283279 master-1 kubenswrapper[4740]: I1014 13:34:35.283207 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wpjmb" event={"ID":"1da977c7-6e59-4af8-bf2e-644213531487","Type":"ContainerStarted","Data":"725e94cb5c6b0eae56a5ea7d35c18a80af2f3945adf39c9da814745906fa62e5"}
Oct 14 13:34:35.289971 master-1 kubenswrapper[4740]: I1014 13:34:35.289909 4740 generic.go:334] "Generic (PLEG): container finished" podID="f3f7dab7-f98a-4577-846d-8ffce7cab78a" containerID="95093bc136e8aaf8fd0e2d1d54c8c2d569c479c27be37877d8afd7f743b9188b" exitCode=0
Oct 14 13:34:35.290062 master-1 kubenswrapper[4740]: I1014 13:34:35.290037 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"f3f7dab7-f98a-4577-846d-8ffce7cab78a","Type":"ContainerDied","Data":"95093bc136e8aaf8fd0e2d1d54c8c2d569c479c27be37877d8afd7f743b9188b"}
Oct 14 13:34:35.296319 master-1 kubenswrapper[4740]: I1014 13:34:35.296152 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-26cmc" event={"ID":"a8c155bd-baa3-49a7-bada-ec4d01119872","Type":"ContainerStarted","Data":"59198a22831e5bdccd0eb6f34858e6f293f034440d0fcb9fbf68c7700da43743"}
Oct 14 13:34:35.297054 master-1 kubenswrapper[4740]: I1014 13:34:35.297001 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-26cmc"
Oct 14 13:34:35.300624 master-1 kubenswrapper[4740]: I1014 13:34:35.300559 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-z9vwp" event={"ID":"e338af78-f165-448a-b83f-83a570b4c9dc","Type":"ContainerStarted","Data":"92a3b7cee7ce6c10fa1b570a7c3d236fe8c07be96b060302a7601493e9d18bdd"}
Oct 14 13:34:35.404199 master-1 kubenswrapper[4740]: I1014 13:34:35.404077 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-26cmc" podStartSLOduration=3.261018186 podStartE2EDuration="6.404056145s" podCreationTimestamp="2025-10-14 13:34:29 +0000 UTC" firstStartedPulling="2025-10-14 13:34:30.964318348 +0000 UTC m=+1696.774607717" lastFinishedPulling="2025-10-14 13:34:34.107356347 +0000 UTC m=+1699.917645676" observedRunningTime="2025-10-14 13:34:35.403216423 +0000 UTC m=+1701.213505752" watchObservedRunningTime="2025-10-14 13:34:35.404056145 +0000 UTC m=+1701.214345474"
Oct 14 13:34:36.053134 master-1 kubenswrapper[4740]: I1014 13:34:36.052927 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-2"]
Oct 14 13:34:36.068953 master-1 kubenswrapper[4740]: I1014 13:34:36.068884 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.074196 master-1 kubenswrapper[4740]: I1014 13:34:36.073599 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 14 13:34:36.074196 master-1 kubenswrapper[4740]: I1014 13:34:36.073878 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 14 13:34:36.074196 master-1 kubenswrapper[4740]: I1014 13:34:36.074091 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 14 13:34:36.074196 master-1 kubenswrapper[4740]: I1014 13:34:36.073916 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 14 13:34:36.075584 master-1 kubenswrapper[4740]: I1014 13:34:36.074811 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 14 13:34:36.075584 master-1 kubenswrapper[4740]: I1014 13:34:36.075052 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 14 13:34:36.081902 master-1 kubenswrapper[4740]: I1014 13:34:36.081836 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-2"]
Oct 14 13:34:36.221379 master-1 kubenswrapper[4740]: I1014 13:34:36.221324 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65ca0967-6297-4934-a52c-2427b0c87fa2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2fa4920d-85c1-4fd4-991a-3af33e188702\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct2zk\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-kube-api-access-ct2zk\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221536 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22432753-e8c4-45ce-8e09-f9d497dc8c8b-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221571 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22432753-e8c4-45ce-8e09-f9d497dc8c8b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221597 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.221708 master-1 kubenswrapper[4740]: I1014 13:34:36.221646 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.222204 master-1 kubenswrapper[4740]: I1014 13:34:36.221843 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.222204 master-1 kubenswrapper[4740]: I1014 13:34:36.221900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.313442 master-1 kubenswrapper[4740]: I1014 13:34:36.313342 4740 generic.go:334] "Generic (PLEG): container finished" podID="1da977c7-6e59-4af8-bf2e-644213531487" containerID="1f0642c86beeb7a3105c1606a4de1147d395710ae74574b43e0c294ad10d2fdd" exitCode=0
Oct 14 13:34:36.313693 master-1 kubenswrapper[4740]: I1014 13:34:36.313408 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wpjmb" event={"ID":"1da977c7-6e59-4af8-bf2e-644213531487","Type":"ContainerDied","Data":"1f0642c86beeb7a3105c1606a4de1147d395710ae74574b43e0c294ad10d2fdd"}
Oct 14 13:34:36.323507 master-1 kubenswrapper[4740]: I1014 13:34:36.323482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct2zk\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-kube-api-access-ct2zk\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323582 master-1 kubenswrapper[4740]: I1014 13:34:36.323521 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323582 master-1 kubenswrapper[4740]: I1014 13:34:36.323544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22432753-e8c4-45ce-8e09-f9d497dc8c8b-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323582 master-1 kubenswrapper[4740]: I1014 13:34:36.323570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22432753-e8c4-45ce-8e09-f9d497dc8c8b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323744 master-1 kubenswrapper[4740]: I1014 13:34:36.323589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323744 master-1 kubenswrapper[4740]: I1014 13:34:36.323605 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323744 master-1 kubenswrapper[4740]: I1014 13:34:36.323630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323744 master-1 kubenswrapper[4740]: I1014 13:34:36.323668 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323744 master-1 kubenswrapper[4740]: I1014 13:34:36.323686 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.323744 master-1 kubenswrapper[4740]: I1014 13:34:36.323741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.325260 master-1 kubenswrapper[4740]: I1014 13:34:36.325185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.325260 master-1 kubenswrapper[4740]: I1014 13:34:36.325248 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-config-data\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.325423 master-1 kubenswrapper[4740]: I1014 13:34:36.325295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-plugins-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.325589 master-1 kubenswrapper[4740]: I1014 13:34:36.325546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.325648 master-1 kubenswrapper[4740]: I1014 13:34:36.325613 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22432753-e8c4-45ce-8e09-f9d497dc8c8b-server-conf\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.327334 master-1 kubenswrapper[4740]: I1014 13:34:36.327316 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.327440 master-1 kubenswrapper[4740]: I1014 13:34:36.327388 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22432753-e8c4-45ce-8e09-f9d497dc8c8b-pod-info\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.327864 master-1 kubenswrapper[4740]: I1014 13:34:36.327834 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.328270 master-1 kubenswrapper[4740]: I1014 13:34:36.328201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22432753-e8c4-45ce-8e09-f9d497dc8c8b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.425110 master-1 kubenswrapper[4740]: I1014 13:34:36.425035 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65ca0967-6297-4934-a52c-2427b0c87fa2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2fa4920d-85c1-4fd4-991a-3af33e188702\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.425680 master-1 kubenswrapper[4740]: I1014 13:34:36.425641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct2zk\" (UniqueName: \"kubernetes.io/projected/22432753-e8c4-45ce-8e09-f9d497dc8c8b-kube-api-access-ct2zk\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:36.428814 master-1 kubenswrapper[4740]: I1014 13:34:36.428779 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 14 13:34:36.428924 master-1 kubenswrapper[4740]: I1014 13:34:36.428822 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65ca0967-6297-4934-a52c-2427b0c87fa2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2fa4920d-85c1-4fd4-991a-3af33e188702\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ce67b5b0a6cd598ba4b801170737ae35eda5e5b6264615c6562bba6752f2dfd1/globalmount\"" pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:37.181049 master-1 kubenswrapper[4740]: I1014 13:34:37.180965 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-1"]
Oct 14 13:34:37.184813 master-1 kubenswrapper[4740]: I1014 13:34:37.184383 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-1"
Oct 14 13:34:37.189613 master-1 kubenswrapper[4740]: I1014 13:34:37.189536 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Oct 14 13:34:37.190280 master-1 kubenswrapper[4740]: I1014 13:34:37.190205 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Oct 14 13:34:37.199310 master-1 kubenswrapper[4740]: I1014 13:34:37.199248 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-1"]
Oct 14 13:34:37.282098 master-1 kubenswrapper[4740]: I1014 13:34:37.281812 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-config-data\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.282098 master-1 kubenswrapper[4740]: I1014 13:34:37.281894 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2n9p\" (UniqueName: \"kubernetes.io/projected/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-kube-api-access-j2n9p\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.282098 master-1 kubenswrapper[4740]: I1014 13:34:37.281935 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-memcached-tls-certs\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.282670 master-1 kubenswrapper[4740]: I1014 13:34:37.282575 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-combined-ca-bundle\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.286263 master-1 kubenswrapper[4740]: I1014 13:34:37.283017 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-kolla-config\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.330293 master-1 kubenswrapper[4740]: I1014 13:34:37.330241 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wpjmb" event={"ID":"1da977c7-6e59-4af8-bf2e-644213531487","Type":"ContainerStarted","Data":"8cccb1c433b47bd4979176ee45e3dc452f6575bf4d5da681af3bdd360dc220f2"}
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.385499 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-memcached-tls-certs\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.385672 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-combined-ca-bundle\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.385731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-kolla-config\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.385861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-config-data\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.385886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2n9p\" (UniqueName: \"kubernetes.io/projected/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-kube-api-access-j2n9p\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.388366 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-kolla-config\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.388458 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-config-data\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.392362 master-1 kubenswrapper[4740]: I1014 13:34:37.392103 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-memcached-tls-certs\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.398281 master-1 kubenswrapper[4740]: I1014 13:34:37.395345 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-combined-ca-bundle\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.414066 master-1 kubenswrapper[4740]: I1014 13:34:37.413982 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2n9p\" (UniqueName: \"kubernetes.io/projected/b48dfc00-2216-4c54-baa7-bb4d33db5a5a-kube-api-access-j2n9p\") pod \"memcached-1\" (UID: \"b48dfc00-2216-4c54-baa7-bb4d33db5a5a\") " pod="openstack/memcached-1"
Oct 14 13:34:37.551672 master-1 kubenswrapper[4740]: I1014 13:34:37.551210 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-1"
Oct 14 13:34:37.953331 master-1 kubenswrapper[4740]: I1014 13:34:37.953261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65ca0967-6297-4934-a52c-2427b0c87fa2\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2fa4920d-85c1-4fd4-991a-3af33e188702\") pod \"rabbitmq-cell1-server-2\" (UID: \"22432753-e8c4-45ce-8e09-f9d497dc8c8b\") " pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:38.194281 master-1 kubenswrapper[4740]: I1014 13:34:38.190652 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:34:38.373667 master-1 kubenswrapper[4740]: I1014 13:34:38.373602 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-2"]
Oct 14 13:34:38.381549 master-1 kubenswrapper[4740]: I1014 13:34:38.381474 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-2"
Oct 14 13:34:38.398670 master-1 kubenswrapper[4740]: I1014 13:34:38.398623 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Oct 14 13:34:38.398814 master-1 kubenswrapper[4740]: I1014 13:34:38.398581 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Oct 14 13:34:38.398814 master-1 kubenswrapper[4740]: I1014 13:34:38.398773 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Oct 14 13:34:38.398904 master-1 kubenswrapper[4740]: I1014 13:34:38.398890 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Oct 14 13:34:38.403631 master-1 kubenswrapper[4740]: I1014 13:34:38.403580 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-2"]
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.509778 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz678\" (UniqueName: \"kubernetes.io/projected/802c2dc1-61e5-470b-973f-73f3ec26c14d-kube-api-access-pz678\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2"
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.509835 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-config-data-default\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2"
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.509866 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-secrets\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2"
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.509896 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-kolla-config\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2"
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.509931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/802c2dc1-61e5-470b-973f-73f3ec26c14d-config-data-generated\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2"
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.509953 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2"
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.509975 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-operator-scripts\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2"
Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.510041 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.512007 master-1 kubenswrapper[4740]: I1014 13:34:38.510073 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd47da6f-0233-420d-b635-873440aac4e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da41a5af-4dd0-490c-a476-feae7a5a479b\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.583629 master-1 kubenswrapper[4740]: I1014 13:34:38.583589 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-1"] Oct 14 13:34:38.587144 master-1 kubenswrapper[4740]: W1014 13:34:38.587119 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb48dfc00_2216_4c54_baa7_bb4d33db5a5a.slice/crio-764d260be7ca697956cc7e1943218941e55d9b54b92e58c8f1485b942514bcdf WatchSource:0}: Error finding container 764d260be7ca697956cc7e1943218941e55d9b54b92e58c8f1485b942514bcdf: Status 404 returned error can't find the container with id 764d260be7ca697956cc7e1943218941e55d9b54b92e58c8f1485b942514bcdf Oct 14 13:34:38.611788 master-1 kubenswrapper[4740]: I1014 13:34:38.611742 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.612016 master-1 kubenswrapper[4740]: I1014 13:34:38.612001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cd47da6f-0233-420d-b635-873440aac4e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da41a5af-4dd0-490c-a476-feae7a5a479b\") 
pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.612151 master-1 kubenswrapper[4740]: I1014 13:34:38.612134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz678\" (UniqueName: \"kubernetes.io/projected/802c2dc1-61e5-470b-973f-73f3ec26c14d-kube-api-access-pz678\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.612253 master-1 kubenswrapper[4740]: I1014 13:34:38.612221 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-config-data-default\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.612356 master-1 kubenswrapper[4740]: I1014 13:34:38.612341 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-secrets\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.612434 master-1 kubenswrapper[4740]: I1014 13:34:38.612422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-kolla-config\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.612518 master-1 kubenswrapper[4740]: I1014 13:34:38.612506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/802c2dc1-61e5-470b-973f-73f3ec26c14d-config-data-generated\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " 
pod="openstack/openstack-galera-2" Oct 14 13:34:38.612591 master-1 kubenswrapper[4740]: I1014 13:34:38.612580 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.612667 master-1 kubenswrapper[4740]: I1014 13:34:38.612655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-operator-scripts\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.613537 master-1 kubenswrapper[4740]: I1014 13:34:38.613495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-config-data-default\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.613609 master-1 kubenswrapper[4740]: I1014 13:34:38.613553 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/802c2dc1-61e5-470b-973f-73f3ec26c14d-config-data-generated\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.613765 master-1 kubenswrapper[4740]: I1014 13:34:38.613732 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-kolla-config\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.614405 master-1 kubenswrapper[4740]: 
I1014 13:34:38.614379 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 13:34:38.614456 master-1 kubenswrapper[4740]: I1014 13:34:38.614412 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cd47da6f-0233-420d-b635-873440aac4e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da41a5af-4dd0-490c-a476-feae7a5a479b\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/cbe7a3eb2aafb2e6f94b666f4d58fb1ca969e5e50fc3877b34179cbc08f0815f/globalmount\"" pod="openstack/openstack-galera-2" Oct 14 13:34:38.614765 master-1 kubenswrapper[4740]: I1014 13:34:38.614748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/802c2dc1-61e5-470b-973f-73f3ec26c14d-operator-scripts\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.619084 master-1 kubenswrapper[4740]: I1014 13:34:38.616821 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-galera-tls-certs\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.622880 master-1 kubenswrapper[4740]: I1014 13:34:38.622353 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-secrets\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.625527 master-1 kubenswrapper[4740]: I1014 13:34:38.625479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/802c2dc1-61e5-470b-973f-73f3ec26c14d-combined-ca-bundle\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.656098 master-1 kubenswrapper[4740]: I1014 13:34:38.655940 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz678\" (UniqueName: \"kubernetes.io/projected/802c2dc1-61e5-470b-973f-73f3ec26c14d-kube-api-access-pz678\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:38.806285 master-1 kubenswrapper[4740]: I1014 13:34:38.799757 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-2"] Oct 14 13:34:38.810842 master-1 kubenswrapper[4740]: W1014 13:34:38.810751 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22432753_e8c4_45ce_8e09_f9d497dc8c8b.slice/crio-fb9efca72356b9f10143466c2791381cc324afd371952118e4e936d24adde497 WatchSource:0}: Error finding container fb9efca72356b9f10143466c2791381cc324afd371952118e4e936d24adde497: Status 404 returned error can't find the container with id fb9efca72356b9f10143466c2791381cc324afd371952118e4e936d24adde497 Oct 14 13:34:39.287265 master-1 kubenswrapper[4740]: I1014 13:34:39.287162 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 14 13:34:39.365613 master-1 kubenswrapper[4740]: I1014 13:34:39.365550 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wpjmb" event={"ID":"1da977c7-6e59-4af8-bf2e-644213531487","Type":"ContainerStarted","Data":"8d291983464b0b9dfd39ba3264632531eb15504140023d9c700a85dc7f56c5a1"} Oct 14 13:34:39.365835 master-1 kubenswrapper[4740]: I1014 13:34:39.365680 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wpjmb" 
Oct 14 13:34:39.365835 master-1 kubenswrapper[4740]: I1014 13:34:39.365723 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wpjmb" Oct 14 13:34:39.367012 master-1 kubenswrapper[4740]: I1014 13:34:39.366961 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"22432753-e8c4-45ce-8e09-f9d497dc8c8b","Type":"ContainerStarted","Data":"fb9efca72356b9f10143466c2791381cc324afd371952118e4e936d24adde497"} Oct 14 13:34:39.368603 master-1 kubenswrapper[4740]: I1014 13:34:39.368568 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-z9vwp" event={"ID":"e338af78-f165-448a-b83f-83a570b4c9dc","Type":"ContainerStarted","Data":"703546056eed0ac673a687b4b6e82cf252e0376ba810e2b65f64725df37c4906"} Oct 14 13:34:39.369882 master-1 kubenswrapper[4740]: I1014 13:34:39.369836 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-1" event={"ID":"b48dfc00-2216-4c54-baa7-bb4d33db5a5a","Type":"ContainerStarted","Data":"764d260be7ca697956cc7e1943218941e55d9b54b92e58c8f1485b942514bcdf"} Oct 14 13:34:39.403570 master-1 kubenswrapper[4740]: I1014 13:34:39.403477 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wpjmb" podStartSLOduration=9.871491847 podStartE2EDuration="10.403462716s" podCreationTimestamp="2025-10-14 13:34:29 +0000 UTC" firstStartedPulling="2025-10-14 13:34:34.51613987 +0000 UTC m=+1700.326429199" lastFinishedPulling="2025-10-14 13:34:35.048110739 +0000 UTC m=+1700.858400068" observedRunningTime="2025-10-14 13:34:39.398507474 +0000 UTC m=+1705.208796833" watchObservedRunningTime="2025-10-14 13:34:39.403462716 +0000 UTC m=+1705.213752045" Oct 14 13:34:39.470176 master-1 kubenswrapper[4740]: I1014 13:34:39.470046 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-z9vwp" 
podStartSLOduration=3.7822675 podStartE2EDuration="7.470002414s" podCreationTimestamp="2025-10-14 13:34:32 +0000 UTC" firstStartedPulling="2025-10-14 13:34:34.493606975 +0000 UTC m=+1700.303896304" lastFinishedPulling="2025-10-14 13:34:38.181341889 +0000 UTC m=+1703.991631218" observedRunningTime="2025-10-14 13:34:39.424003468 +0000 UTC m=+1705.234292817" watchObservedRunningTime="2025-10-14 13:34:39.470002414 +0000 UTC m=+1705.280291753" Oct 14 13:34:39.745915 master-1 kubenswrapper[4740]: I1014 13:34:39.745836 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cd47da6f-0233-420d-b635-873440aac4e7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da41a5af-4dd0-490c-a476-feae7a5a479b\") pod \"openstack-galera-2\" (UID: \"802c2dc1-61e5-470b-973f-73f3ec26c14d\") " pod="openstack/openstack-galera-2" Oct 14 13:34:39.961034 master-1 kubenswrapper[4740]: I1014 13:34:39.960980 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-2" Oct 14 13:34:40.387481 master-1 kubenswrapper[4740]: I1014 13:34:40.387411 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"f3f7dab7-f98a-4577-846d-8ffce7cab78a","Type":"ContainerStarted","Data":"57a2898899ebbe6fc29493f9d4d4be494c7e142d2f62220471e472ca03d81f18"} Oct 14 13:34:40.398021 master-1 kubenswrapper[4740]: I1014 13:34:40.397731 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"22432753-e8c4-45ce-8e09-f9d497dc8c8b","Type":"ContainerStarted","Data":"f61c0813208d698459e3a84cadb239c6e5d819652eeebc235b87668c5ee8b28a"} Oct 14 13:34:40.442848 master-1 kubenswrapper[4740]: I1014 13:34:40.442751 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-2"] Oct 14 13:34:41.127081 master-1 kubenswrapper[4740]: W1014 13:34:41.126939 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod802c2dc1_61e5_470b_973f_73f3ec26c14d.slice/crio-4ab46b872316d1363520e0522457c7c41486362f0a45a4a8678378a129fa500f WatchSource:0}: Error finding container 4ab46b872316d1363520e0522457c7c41486362f0a45a4a8678378a129fa500f: Status 404 returned error can't find the container with id 4ab46b872316d1363520e0522457c7c41486362f0a45a4a8678378a129fa500f Oct 14 13:34:41.407806 master-1 kubenswrapper[4740]: I1014 13:34:41.407621 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"802c2dc1-61e5-470b-973f-73f3ec26c14d","Type":"ContainerStarted","Data":"4ab46b872316d1363520e0522457c7c41486362f0a45a4a8678378a129fa500f"} Oct 14 13:34:42.419452 master-1 kubenswrapper[4740]: I1014 13:34:42.419206 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-1" event={"ID":"b48dfc00-2216-4c54-baa7-bb4d33db5a5a","Type":"ContainerStarted","Data":"7084c541bd2530b5e441e1e147713ccdb57ed683bd91443916a21a2038676c8c"} Oct 14 13:34:42.419452 master-1 kubenswrapper[4740]: I1014 13:34:42.419414 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-1" Oct 14 13:34:42.423646 master-1 kubenswrapper[4740]: I1014 13:34:42.423594 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"f3f7dab7-f98a-4577-846d-8ffce7cab78a","Type":"ContainerStarted","Data":"bd2744e52d8d8a1a424036b466320c2e8a42ffd4bf1209bffdef355caaebe816"} Oct 14 13:34:42.424585 master-1 kubenswrapper[4740]: I1014 13:34:42.424537 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Oct 14 13:34:42.428027 master-1 kubenswrapper[4740]: I1014 13:34:42.427963 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Oct 14 13:34:42.447490 master-1 kubenswrapper[4740]: I1014 13:34:42.447409 
4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-1" podStartSLOduration=2.880825961 podStartE2EDuration="5.447384896s" podCreationTimestamp="2025-10-14 13:34:37 +0000 UTC" firstStartedPulling="2025-10-14 13:34:38.589809174 +0000 UTC m=+1704.400098513" lastFinishedPulling="2025-10-14 13:34:41.156368119 +0000 UTC m=+1706.966657448" observedRunningTime="2025-10-14 13:34:42.447032636 +0000 UTC m=+1708.257321975" watchObservedRunningTime="2025-10-14 13:34:42.447384896 +0000 UTC m=+1708.257674225" Oct 14 13:34:42.482573 master-1 kubenswrapper[4740]: I1014 13:34:42.482488 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.834552203 podStartE2EDuration="21.482469623s" podCreationTimestamp="2025-10-14 13:34:21 +0000 UTC" firstStartedPulling="2025-10-14 13:34:23.846366126 +0000 UTC m=+1689.656655455" lastFinishedPulling="2025-10-14 13:34:39.494283536 +0000 UTC m=+1705.304572875" observedRunningTime="2025-10-14 13:34:42.48200566 +0000 UTC m=+1708.292294999" watchObservedRunningTime="2025-10-14 13:34:42.482469623 +0000 UTC m=+1708.292758952" Oct 14 13:34:42.635367 master-1 kubenswrapper[4740]: I1014 13:34:42.635295 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-1"] Oct 14 13:34:42.637510 master-1 kubenswrapper[4740]: I1014 13:34:42.637324 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.640575 master-1 kubenswrapper[4740]: I1014 13:34:42.640536 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 14 13:34:42.640814 master-1 kubenswrapper[4740]: I1014 13:34:42.640789 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 14 13:34:42.641600 master-1 kubenswrapper[4740]: I1014 13:34:42.641490 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 14 13:34:42.680624 master-1 kubenswrapper[4740]: I1014 13:34:42.654104 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-1"] Oct 14 13:34:42.787183 master-1 kubenswrapper[4740]: I1014 13:34:42.787098 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfh2\" (UniqueName: \"kubernetes.io/projected/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-kube-api-access-npfh2\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787183 master-1 kubenswrapper[4740]: I1014 13:34:42.787170 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-kolla-config\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787605 master-1 kubenswrapper[4740]: I1014 13:34:42.787205 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-config-data-generated\") pod \"openstack-cell1-galera-1\" (UID: 
\"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787605 master-1 kubenswrapper[4740]: I1014 13:34:42.787243 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-config-data-default\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787605 master-1 kubenswrapper[4740]: I1014 13:34:42.787298 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b08f8a5e-e000-40bd-a1ad-759c5aed06ee\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3a7d0af1-f2b5-4a6e-93fa-d61996144f00\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787605 master-1 kubenswrapper[4740]: I1014 13:34:42.787320 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-operator-scripts\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787881 master-1 kubenswrapper[4740]: I1014 13:34:42.787607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-combined-ca-bundle\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787881 master-1 kubenswrapper[4740]: I1014 13:34:42.787777 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-secrets\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.787881 master-1 kubenswrapper[4740]: I1014 13:34:42.787800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-galera-tls-certs\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890217 master-1 kubenswrapper[4740]: I1014 13:34:42.890083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npfh2\" (UniqueName: \"kubernetes.io/projected/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-kube-api-access-npfh2\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890217 master-1 kubenswrapper[4740]: I1014 13:34:42.890193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-kolla-config\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890644 master-1 kubenswrapper[4740]: I1014 13:34:42.890288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-config-data-generated\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890644 master-1 kubenswrapper[4740]: I1014 13:34:42.890326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-config-data-default\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890644 master-1 kubenswrapper[4740]: I1014 13:34:42.890373 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b08f8a5e-e000-40bd-a1ad-759c5aed06ee\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3a7d0af1-f2b5-4a6e-93fa-d61996144f00\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890644 master-1 kubenswrapper[4740]: I1014 13:34:42.890424 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-operator-scripts\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890644 master-1 kubenswrapper[4740]: I1014 13:34:42.890488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-combined-ca-bundle\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.890644 master-1 kubenswrapper[4740]: I1014 13:34:42.890629 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-secrets\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.891109 master-1 kubenswrapper[4740]: I1014 13:34:42.890669 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-galera-tls-certs\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.891109 master-1 kubenswrapper[4740]: I1014 13:34:42.890914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-config-data-generated\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.891747 master-1 kubenswrapper[4740]: I1014 13:34:42.891693 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-config-data-default\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.892022 master-1 kubenswrapper[4740]: I1014 13:34:42.891971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-kolla-config\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.892510 master-1 kubenswrapper[4740]: I1014 13:34:42.892467 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 13:34:42.892510 master-1 kubenswrapper[4740]: I1014 13:34:42.892505 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b08f8a5e-e000-40bd-a1ad-759c5aed06ee\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3a7d0af1-f2b5-4a6e-93fa-d61996144f00\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/46f031574bd0b4ff2ba11510225900477d82ea1d91f0a11a8e2118cdba3d53e1/globalmount\"" pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.892746 master-1 kubenswrapper[4740]: I1014 13:34:42.892584 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-operator-scripts\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.894170 master-1 kubenswrapper[4740]: I1014 13:34:42.893884 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-secrets\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.895292 master-1 kubenswrapper[4740]: I1014 13:34:42.895208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-galera-tls-certs\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.896017 master-1 kubenswrapper[4740]: I1014 13:34:42.895937 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-combined-ca-bundle\") pod 
\"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:42.910595 master-1 kubenswrapper[4740]: I1014 13:34:42.910537 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npfh2\" (UniqueName: \"kubernetes.io/projected/d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf-kube-api-access-npfh2\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:44.211241 master-1 kubenswrapper[4740]: I1014 13:34:44.211152 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b08f8a5e-e000-40bd-a1ad-759c5aed06ee\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3a7d0af1-f2b5-4a6e-93fa-d61996144f00\") pod \"openstack-cell1-galera-1\" (UID: \"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf\") " pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:44.484421 master-1 kubenswrapper[4740]: I1014 13:34:44.484265 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-1" Oct 14 13:34:45.456414 master-1 kubenswrapper[4740]: I1014 13:34:45.455979 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"802c2dc1-61e5-470b-973f-73f3ec26c14d","Type":"ContainerStarted","Data":"7825df0b7ee4e7dc5dd182cf6b7623acf6e66f023d8e4b6ea632e0685fcd0b25"} Oct 14 13:34:45.512814 master-1 kubenswrapper[4740]: I1014 13:34:45.512610 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-1"] Oct 14 13:34:45.518852 master-1 kubenswrapper[4740]: W1014 13:34:45.518792 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0007ecc_8302_4bb6_a9dc_dbc9d40b87cf.slice/crio-4d0b8c338f830fd750e0126634667486830ad2fed6180417102349631d5d277f WatchSource:0}: Error finding container 4d0b8c338f830fd750e0126634667486830ad2fed6180417102349631d5d277f: Status 404 returned error can't find the container with id 4d0b8c338f830fd750e0126634667486830ad2fed6180417102349631d5d277f Oct 14 13:34:46.466006 master-1 kubenswrapper[4740]: I1014 13:34:46.465920 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf","Type":"ContainerStarted","Data":"0eb72065f9c8f9ed9c7a5bde6b62b91e78b9ede72c49e9bd8f120138c3edab97"} Oct 14 13:34:46.466006 master-1 kubenswrapper[4740]: I1014 13:34:46.465995 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf","Type":"ContainerStarted","Data":"4d0b8c338f830fd750e0126634667486830ad2fed6180417102349631d5d277f"} Oct 14 13:34:46.547640 master-1 kubenswrapper[4740]: I1014 13:34:46.546317 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 14 13:34:46.549888 master-1 kubenswrapper[4740]: I1014 13:34:46.549827 4740 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.553171 master-1 kubenswrapper[4740]: I1014 13:34:46.553098 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 14 13:34:46.553267 master-1 kubenswrapper[4740]: I1014 13:34:46.553199 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 14 13:34:46.553267 master-1 kubenswrapper[4740]: I1014 13:34:46.553255 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 14 13:34:46.638001 master-1 kubenswrapper[4740]: I1014 13:34:46.637906 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 14 13:34:46.709832 master-1 kubenswrapper[4740]: I1014 13:34:46.709600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.709832 master-1 kubenswrapper[4740]: I1014 13:34:46.709676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtm49\" (UniqueName: \"kubernetes.io/projected/f0a85212-864a-4555-bec5-00c2dff52beb-kube-api-access-jtm49\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.709832 master-1 kubenswrapper[4740]: I1014 13:34:46.709727 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-86e61661-5ad7-410b-954a-f5badddc4d8c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d17e2199-31e1-47e8-8893-a5a1ead0207d\") pod \"ovsdbserver-nb-2\" (UID: 
\"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.710158 master-1 kubenswrapper[4740]: I1014 13:34:46.710037 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a85212-864a-4555-bec5-00c2dff52beb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.710158 master-1 kubenswrapper[4740]: I1014 13:34:46.710121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f0a85212-864a-4555-bec5-00c2dff52beb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.710158 master-1 kubenswrapper[4740]: I1014 13:34:46.710145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.710311 master-1 kubenswrapper[4740]: I1014 13:34:46.710269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.710890 master-1 kubenswrapper[4740]: I1014 13:34:46.710366 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0a85212-864a-4555-bec5-00c2dff52beb-config\") pod \"ovsdbserver-nb-2\" (UID: 
\"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.811739 master-1 kubenswrapper[4740]: I1014 13:34:46.811664 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a85212-864a-4555-bec5-00c2dff52beb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.811739 master-1 kubenswrapper[4740]: I1014 13:34:46.811718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f0a85212-864a-4555-bec5-00c2dff52beb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.811739 master-1 kubenswrapper[4740]: I1014 13:34:46.811742 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.811770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.811807 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0a85212-864a-4555-bec5-00c2dff52beb-config\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 
kubenswrapper[4740]: I1014 13:34:46.811878 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.811919 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtm49\" (UniqueName: \"kubernetes.io/projected/f0a85212-864a-4555-bec5-00c2dff52beb-kube-api-access-jtm49\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.811954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-86e61661-5ad7-410b-954a-f5badddc4d8c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d17e2199-31e1-47e8-8893-a5a1ead0207d\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.812744 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f0a85212-864a-4555-bec5-00c2dff52beb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.813653 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0a85212-864a-4555-bec5-00c2dff52beb-config\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.813870 4740 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a85212-864a-4555-bec5-00c2dff52beb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.814496 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 13:34:46.814799 master-1 kubenswrapper[4740]: I1014 13:34:46.814541 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-86e61661-5ad7-410b-954a-f5badddc4d8c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d17e2199-31e1-47e8-8893-a5a1ead0207d\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/373df24b57d29060e0f923fbb315d380d40c7d618716e1f8e8d7037509092265/globalmount\"" pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.816104 master-1 kubenswrapper[4740]: I1014 13:34:46.815927 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.823410 master-1 kubenswrapper[4740]: I1014 13:34:46.823299 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.826490 master-1 kubenswrapper[4740]: I1014 13:34:46.825147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0a85212-864a-4555-bec5-00c2dff52beb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:46.836219 master-1 kubenswrapper[4740]: I1014 13:34:46.836110 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtm49\" (UniqueName: \"kubernetes.io/projected/f0a85212-864a-4555-bec5-00c2dff52beb-kube-api-access-jtm49\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:47.554367 master-1 kubenswrapper[4740]: I1014 13:34:47.553785 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-1" Oct 14 13:34:48.290893 master-1 kubenswrapper[4740]: I1014 13:34:48.290793 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-86e61661-5ad7-410b-954a-f5badddc4d8c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^d17e2199-31e1-47e8-8893-a5a1ead0207d\") pod \"ovsdbserver-nb-2\" (UID: \"f0a85212-864a-4555-bec5-00c2dff52beb\") " pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:48.370450 master-1 kubenswrapper[4740]: I1014 13:34:48.370377 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Oct 14 13:34:48.401152 master-1 kubenswrapper[4740]: I1014 13:34:48.401084 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 13:34:48.403335 master-1 kubenswrapper[4740]: I1014 13:34:48.403294 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.407690 master-1 kubenswrapper[4740]: I1014 13:34:48.407634 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 14 13:34:48.407762 master-1 kubenswrapper[4740]: I1014 13:34:48.407715 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 14 13:34:48.408144 master-1 kubenswrapper[4740]: I1014 13:34:48.408087 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 14 13:34:48.543975 master-1 kubenswrapper[4740]: I1014 13:34:48.539054 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 13:34:48.849223 master-1 kubenswrapper[4740]: I1014 13:34:48.849154 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3948248-f8c9-441f-bf67-0ff1253f6465-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.850017 master-1 kubenswrapper[4740]: I1014 13:34:48.849378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.850017 master-1 kubenswrapper[4740]: I1014 13:34:48.849458 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3948248-f8c9-441f-bf67-0ff1253f6465-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.850017 master-1 
kubenswrapper[4740]: I1014 13:34:48.849492 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghpwn\" (UniqueName: \"kubernetes.io/projected/c3948248-f8c9-441f-bf67-0ff1253f6465-kube-api-access-ghpwn\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.850017 master-1 kubenswrapper[4740]: I1014 13:34:48.849528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3948248-f8c9-441f-bf67-0ff1253f6465-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.850017 master-1 kubenswrapper[4740]: I1014 13:34:48.849662 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.850017 master-1 kubenswrapper[4740]: I1014 13:34:48.849701 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c52374ad-762b-4288-b8c9-8dd794a180ac\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a53ed419-28ae-4fde-902d-a92118710b67\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.850017 master-1 kubenswrapper[4740]: I1014 13:34:48.849809 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 
13:34:48.952387 master-1 kubenswrapper[4740]: I1014 13:34:48.952283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3948248-f8c9-441f-bf67-0ff1253f6465-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.952387 master-1 kubenswrapper[4740]: I1014 13:34:48.952382 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.953553 master-1 kubenswrapper[4740]: I1014 13:34:48.952420 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3948248-f8c9-441f-bf67-0ff1253f6465-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.953553 master-1 kubenswrapper[4740]: I1014 13:34:48.952445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghpwn\" (UniqueName: \"kubernetes.io/projected/c3948248-f8c9-441f-bf67-0ff1253f6465-kube-api-access-ghpwn\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.953553 master-1 kubenswrapper[4740]: I1014 13:34:48.952479 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3948248-f8c9-441f-bf67-0ff1253f6465-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.953553 master-1 kubenswrapper[4740]: I1014 13:34:48.952527 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.953553 master-1 kubenswrapper[4740]: I1014 13:34:48.952552 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c52374ad-762b-4288-b8c9-8dd794a180ac\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a53ed419-28ae-4fde-902d-a92118710b67\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.953553 master-1 kubenswrapper[4740]: I1014 13:34:48.952593 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.953553 master-1 kubenswrapper[4740]: I1014 13:34:48.953074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3948248-f8c9-441f-bf67-0ff1253f6465-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.954468 master-1 kubenswrapper[4740]: I1014 13:34:48.954350 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3948248-f8c9-441f-bf67-0ff1253f6465-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.954468 master-1 kubenswrapper[4740]: I1014 13:34:48.954406 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c3948248-f8c9-441f-bf67-0ff1253f6465-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.955456 master-1 kubenswrapper[4740]: I1014 13:34:48.955391 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 13:34:48.955590 master-1 kubenswrapper[4740]: I1014 13:34:48.955460 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c52374ad-762b-4288-b8c9-8dd794a180ac\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a53ed419-28ae-4fde-902d-a92118710b67\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/fbe460f50700c3f7f33833e8303ae0336c2632d79db6cf4e749dfe1d67d4bcd6/globalmount\"" pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.957265 master-1 kubenswrapper[4740]: I1014 13:34:48.957136 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.957692 master-1 kubenswrapper[4740]: I1014 13:34:48.957634 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.958290 master-1 kubenswrapper[4740]: I1014 13:34:48.958170 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3948248-f8c9-441f-bf67-0ff1253f6465-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:48.993222 master-1 kubenswrapper[4740]: I1014 13:34:48.993129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghpwn\" (UniqueName: \"kubernetes.io/projected/c3948248-f8c9-441f-bf67-0ff1253f6465-kube-api-access-ghpwn\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:49.317314 master-1 kubenswrapper[4740]: I1014 13:34:49.317218 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Oct 14 13:34:49.328074 master-1 kubenswrapper[4740]: W1014 13:34:49.328009 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0a85212_864a_4555_bec5_00c2dff52beb.slice/crio-145c63dd4f7c117770976c171a2447cf1d1c71eefd04bcba6a866c0ed6892278 WatchSource:0}: Error finding container 145c63dd4f7c117770976c171a2447cf1d1c71eefd04bcba6a866c0ed6892278: Status 404 returned error can't find the container with id 145c63dd4f7c117770976c171a2447cf1d1c71eefd04bcba6a866c0ed6892278 Oct 14 13:34:49.497575 master-1 kubenswrapper[4740]: I1014 13:34:49.497439 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"f0a85212-864a-4555-bec5-00c2dff52beb","Type":"ContainerStarted","Data":"145c63dd4f7c117770976c171a2447cf1d1c71eefd04bcba6a866c0ed6892278"} Oct 14 13:34:49.501689 master-1 kubenswrapper[4740]: I1014 13:34:49.501613 4740 generic.go:334] "Generic (PLEG): container finished" podID="802c2dc1-61e5-470b-973f-73f3ec26c14d" containerID="7825df0b7ee4e7dc5dd182cf6b7623acf6e66f023d8e4b6ea632e0685fcd0b25" exitCode=0 Oct 14 13:34:49.501689 master-1 kubenswrapper[4740]: I1014 13:34:49.501677 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" 
event={"ID":"802c2dc1-61e5-470b-973f-73f3ec26c14d","Type":"ContainerDied","Data":"7825df0b7ee4e7dc5dd182cf6b7623acf6e66f023d8e4b6ea632e0685fcd0b25"} Oct 14 13:34:50.338659 master-1 kubenswrapper[4740]: I1014 13:34:50.338583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c52374ad-762b-4288-b8c9-8dd794a180ac\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a53ed419-28ae-4fde-902d-a92118710b67\") pod \"ovsdbserver-sb-0\" (UID: \"c3948248-f8c9-441f-bf67-0ff1253f6465\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:50.514867 master-1 kubenswrapper[4740]: I1014 13:34:50.514804 4740 generic.go:334] "Generic (PLEG): container finished" podID="d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf" containerID="0eb72065f9c8f9ed9c7a5bde6b62b91e78b9ede72c49e9bd8f120138c3edab97" exitCode=0 Oct 14 13:34:50.515109 master-1 kubenswrapper[4740]: I1014 13:34:50.514903 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf","Type":"ContainerDied","Data":"0eb72065f9c8f9ed9c7a5bde6b62b91e78b9ede72c49e9bd8f120138c3edab97"} Oct 14 13:34:50.518268 master-1 kubenswrapper[4740]: I1014 13:34:50.518123 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-2" event={"ID":"802c2dc1-61e5-470b-973f-73f3ec26c14d","Type":"ContainerStarted","Data":"1a3c57f36c249cc1a579e6c19bc0f1bd21ecd5d347e81cd0277862a5e819cdbd"} Oct 14 13:34:50.578591 master-1 kubenswrapper[4740]: I1014 13:34:50.578532 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 13:34:51.531369 master-1 kubenswrapper[4740]: I1014 13:34:51.531221 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"f0a85212-864a-4555-bec5-00c2dff52beb","Type":"ContainerStarted","Data":"ca639135ed8fa123dec22ba02204d1572ca3db0af6a47269f8bbf6d3ca2a2cf4"} Oct 14 13:34:51.531369 master-1 kubenswrapper[4740]: I1014 13:34:51.531345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"f0a85212-864a-4555-bec5-00c2dff52beb","Type":"ContainerStarted","Data":"a7ec8f734fd086ec6efa062f1563ad6c2fe5f6cad73a4a33d678c1a87237419a"} Oct 14 13:34:51.534921 master-1 kubenswrapper[4740]: I1014 13:34:51.534873 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-1" event={"ID":"d0007ecc-8302-4bb6-a9dc-dbc9d40b87cf","Type":"ContainerStarted","Data":"70d09ed40e9898cdd0adcc02be364b95ce33d79ffcd7ec19adcf5828c2187d75"} Oct 14 13:34:52.124400 master-1 kubenswrapper[4740]: I1014 13:34:52.124199 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-2" podStartSLOduration=35.201839446 podStartE2EDuration="39.124170879s" podCreationTimestamp="2025-10-14 13:34:13 +0000 UTC" firstStartedPulling="2025-10-14 13:34:41.149847887 +0000 UTC m=+1706.960137216" lastFinishedPulling="2025-10-14 13:34:45.07217932 +0000 UTC m=+1710.882468649" observedRunningTime="2025-10-14 13:34:52.11698518 +0000 UTC m=+1717.927274549" watchObservedRunningTime="2025-10-14 13:34:52.124170879 +0000 UTC m=+1717.934460258" Oct 14 13:34:52.874203 master-1 kubenswrapper[4740]: I1014 13:34:52.874058 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=22.881408123 podStartE2EDuration="23.874026955s" podCreationTimestamp="2025-10-14 13:34:29 +0000 UTC" firstStartedPulling="2025-10-14 13:34:49.347916352 +0000 
UTC m=+1715.158205681" lastFinishedPulling="2025-10-14 13:34:50.340535174 +0000 UTC m=+1716.150824513" observedRunningTime="2025-10-14 13:34:52.153672139 +0000 UTC m=+1717.963961578" watchObservedRunningTime="2025-10-14 13:34:52.874026955 +0000 UTC m=+1718.684316324"
Oct 14 13:34:53.371137 master-1 kubenswrapper[4740]: I1014 13:34:53.371024 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Oct 14 13:34:53.594432 master-1 kubenswrapper[4740]: I1014 13:34:53.594099 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-1" podStartSLOduration=39.594043382 podStartE2EDuration="39.594043382s" podCreationTimestamp="2025-10-14 13:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:53.582920238 +0000 UTC m=+1719.393209597" watchObservedRunningTime="2025-10-14 13:34:53.594043382 +0000 UTC m=+1719.404332721"
Oct 14 13:34:53.914628 master-1 kubenswrapper[4740]: W1014 13:34:53.914566 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3948248_f8c9_441f_bf67_0ff1253f6465.slice/crio-5957eefa87564d521bbc14a9c4862d386c071cfe29cf0cba9ee921f320c774c4 WatchSource:0}: Error finding container 5957eefa87564d521bbc14a9c4862d386c071cfe29cf0cba9ee921f320c774c4: Status 404 returned error can't find the container with id 5957eefa87564d521bbc14a9c4862d386c071cfe29cf0cba9ee921f320c774c4
Oct 14 13:34:54.360118 master-1 kubenswrapper[4740]: I1014 13:34:54.360007 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 14 13:34:54.371095 master-1 kubenswrapper[4740]: I1014 13:34:54.371003 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Oct 14 13:34:54.436641 master-1 kubenswrapper[4740]: I1014 13:34:54.436565 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Oct 14 13:34:54.489393 master-1 kubenswrapper[4740]: I1014 13:34:54.485547 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-1"
Oct 14 13:34:54.489393 master-1 kubenswrapper[4740]: I1014 13:34:54.485666 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-1"
Oct 14 13:34:54.565578 master-1 kubenswrapper[4740]: I1014 13:34:54.565483 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3948248-f8c9-441f-bf67-0ff1253f6465","Type":"ContainerStarted","Data":"5957eefa87564d521bbc14a9c4862d386c071cfe29cf0cba9ee921f320c774c4"}
Oct 14 13:34:55.639571 master-1 kubenswrapper[4740]: I1014 13:34:55.639485 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2"
Oct 14 13:34:56.585389 master-1 kubenswrapper[4740]: I1014 13:34:56.585303 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3948248-f8c9-441f-bf67-0ff1253f6465","Type":"ContainerStarted","Data":"613e6aed355bcb75bf47f6d5ffa33a080a6e17dcf16bdfae54e7b2b51df782b5"}
Oct 14 13:34:56.585389 master-1 kubenswrapper[4740]: I1014 13:34:56.585391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3948248-f8c9-441f-bf67-0ff1253f6465","Type":"ContainerStarted","Data":"57b25ff9a9194a4816962cda7c859a9afa536572b80fa91735ab7a8b922b92b9"}
Oct 14 13:34:56.623729 master-1 kubenswrapper[4740]: I1014 13:34:56.623632 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=24.683783041 podStartE2EDuration="26.623607103s" podCreationTimestamp="2025-10-14 13:34:30 +0000 UTC" firstStartedPulling="2025-10-14 13:34:53.917207193 +0000 UTC m=+1719.727496562" lastFinishedPulling="2025-10-14 13:34:55.857031305 +0000 UTC m=+1721.667320624" observedRunningTime="2025-10-14 13:34:56.618506368 +0000 UTC m=+1722.428795697" watchObservedRunningTime="2025-10-14 13:34:56.623607103 +0000 UTC m=+1722.433896422"
Oct 14 13:34:59.580188 master-1 kubenswrapper[4740]: I1014 13:34:59.580088 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Oct 14 13:34:59.656990 master-1 kubenswrapper[4740]: I1014 13:34:59.655628 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Oct 14 13:34:59.656990 master-1 kubenswrapper[4740]: I1014 13:34:59.656022 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Oct 14 13:34:59.961628 master-1 kubenswrapper[4740]: I1014 13:34:59.961407 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-2"
Oct 14 13:34:59.961628 master-1 kubenswrapper[4740]: I1014 13:34:59.961534 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-2"
Oct 14 13:35:01.633046 master-1 kubenswrapper[4740]: I1014 13:35:01.632866 4740 generic.go:334] "Generic (PLEG): container finished" podID="99ce92c4-34cd-4599-9614-10e7663bd9e7" containerID="806c6e8d87e1e46b164716290b20250688aecdf887e4ef88d1340fa437ddd895" exitCode=0
Oct 14 13:35:01.633046 master-1 kubenswrapper[4740]: I1014 13:35:01.632951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99ce92c4-34cd-4599-9614-10e7663bd9e7","Type":"ContainerDied","Data":"806c6e8d87e1e46b164716290b20250688aecdf887e4ef88d1340fa437ddd895"}
Oct 14 13:35:02.644865 master-1 kubenswrapper[4740]: I1014 13:35:02.644753 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"99ce92c4-34cd-4599-9614-10e7663bd9e7","Type":"ContainerStarted","Data":"61f3ba93b02d73464301da13cc205f5c625dc188342fac1ec157d2fe0a477dde"}
Oct 14 13:35:02.645858 master-1 kubenswrapper[4740]: I1014 13:35:02.645079 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Oct 14 13:35:02.680476 master-1 kubenswrapper[4740]: I1014 13:35:02.680371 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.650867442 podStartE2EDuration="51.680345251s" podCreationTimestamp="2025-10-14 13:34:11 +0000 UTC" firstStartedPulling="2025-10-14 13:34:25.036859586 +0000 UTC m=+1690.847148915" lastFinishedPulling="2025-10-14 13:34:28.066337395 +0000 UTC m=+1693.876626724" observedRunningTime="2025-10-14 13:35:02.67915322 +0000 UTC m=+1728.489442539" watchObservedRunningTime="2025-10-14 13:35:02.680345251 +0000 UTC m=+1728.490634630"
Oct 14 13:35:05.418678 master-1 kubenswrapper[4740]: I1014 13:35:05.418537 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-26cmc" podUID="a8c155bd-baa3-49a7-bada-ec4d01119872" containerName="ovn-controller" probeResult="failure" output=<
Oct 14 13:35:05.418678 master-1 kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 14 13:35:05.418678 master-1 kubenswrapper[4740]: >
Oct 14 13:35:05.624421 master-1 kubenswrapper[4740]: I1014 13:35:05.624346 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Oct 14 13:35:06.578386 master-1 kubenswrapper[4740]: I1014 13:35:06.578329 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-2"
Oct 14 13:35:06.636268 master-1 kubenswrapper[4740]: I1014 13:35:06.636198 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-2"
Oct 14 13:35:08.181648 master-1 kubenswrapper[4740]: I1014 13:35:08.181582 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6577479d4f-25bpt"]
Oct 14 13:35:08.185274 master-1 kubenswrapper[4740]: I1014 13:35:08.185241 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.188759 master-1 kubenswrapper[4740]: I1014 13:35:08.188489 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 14 13:35:08.188759 master-1 kubenswrapper[4740]: I1014 13:35:08.188706 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 14 13:35:08.202253 master-1 kubenswrapper[4740]: I1014 13:35:08.198963 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6577479d4f-25bpt"]
Oct 14 13:35:08.365275 master-1 kubenswrapper[4740]: I1014 13:35:08.365200 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-dns-svc\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.365488 master-1 kubenswrapper[4740]: I1014 13:35:08.365378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-sb\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.365488 master-1 kubenswrapper[4740]: I1014 13:35:08.365420 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-config\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.365488 master-1 kubenswrapper[4740]: I1014 13:35:08.365456 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-nb\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.365627 master-1 kubenswrapper[4740]: I1014 13:35:08.365580 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8bf\" (UniqueName: \"kubernetes.io/projected/ea346082-3d5b-4eec-bc76-e69e6c45b08a-kube-api-access-hz8bf\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.467441 master-1 kubenswrapper[4740]: I1014 13:35:08.467333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8bf\" (UniqueName: \"kubernetes.io/projected/ea346082-3d5b-4eec-bc76-e69e6c45b08a-kube-api-access-hz8bf\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.467441 master-1 kubenswrapper[4740]: I1014 13:35:08.467451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-dns-svc\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.467891 master-1 kubenswrapper[4740]: I1014 13:35:08.467522 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-sb\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.467891 master-1 kubenswrapper[4740]: I1014 13:35:08.467543 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-config\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.467891 master-1 kubenswrapper[4740]: I1014 13:35:08.467570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-nb\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.468516 master-1 kubenswrapper[4740]: I1014 13:35:08.468447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-nb\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.468894 master-1 kubenswrapper[4740]: I1014 13:35:08.468840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-config\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.469071 master-1 kubenswrapper[4740]: I1014 13:35:08.469038 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-dns-svc\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.470008 master-1 kubenswrapper[4740]: I1014 13:35:08.469951 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-sb\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.500372 master-1 kubenswrapper[4740]: I1014 13:35:08.500308 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz8bf\" (UniqueName: \"kubernetes.io/projected/ea346082-3d5b-4eec-bc76-e69e6c45b08a-kube-api-access-hz8bf\") pod \"dnsmasq-dns-6577479d4f-25bpt\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") " pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:08.523723 master-1 kubenswrapper[4740]: I1014 13:35:08.523669 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:09.060221 master-1 kubenswrapper[4740]: I1014 13:35:09.060046 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6577479d4f-25bpt"]
Oct 14 13:35:09.297847 master-1 kubenswrapper[4740]: W1014 13:35:09.297780 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea346082_3d5b_4eec_bc76_e69e6c45b08a.slice/crio-14b79be0f49e4eb8e67b84af562de240ee07dae3cbea4e5b9b59ff4c00e2a34e WatchSource:0}: Error finding container 14b79be0f49e4eb8e67b84af562de240ee07dae3cbea4e5b9b59ff4c00e2a34e: Status 404 returned error can't find the container with id 14b79be0f49e4eb8e67b84af562de240ee07dae3cbea4e5b9b59ff4c00e2a34e
Oct 14 13:35:09.309296 master-1 kubenswrapper[4740]: I1014 13:35:09.306831 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6577479d4f-25bpt"]
Oct 14 13:35:09.724120 master-1 kubenswrapper[4740]: I1014 13:35:09.724063 4740 generic.go:334] "Generic (PLEG): container finished" podID="ea346082-3d5b-4eec-bc76-e69e6c45b08a" containerID="244ac3760032f579e60e9f7ae11b63eb59eac1d08eb397b4951edb380c48fb4b" exitCode=0
Oct 14 13:35:09.724400 master-1 kubenswrapper[4740]: I1014 13:35:09.724322 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6577479d4f-25bpt" event={"ID":"ea346082-3d5b-4eec-bc76-e69e6c45b08a","Type":"ContainerDied","Data":"244ac3760032f579e60e9f7ae11b63eb59eac1d08eb397b4951edb380c48fb4b"}
Oct 14 13:35:09.724560 master-1 kubenswrapper[4740]: I1014 13:35:09.724535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6577479d4f-25bpt" event={"ID":"ea346082-3d5b-4eec-bc76-e69e6c45b08a","Type":"ContainerStarted","Data":"14b79be0f49e4eb8e67b84af562de240ee07dae3cbea4e5b9b59ff4c00e2a34e"}
Oct 14 13:35:10.350832 master-1 kubenswrapper[4740]: I1014 13:35:10.350725 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:10.432265 master-1 kubenswrapper[4740]: I1014 13:35:10.429643 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wpjmb"
Oct 14 13:35:10.432265 master-1 kubenswrapper[4740]: I1014 13:35:10.431927 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-26cmc" podUID="a8c155bd-baa3-49a7-bada-ec4d01119872" containerName="ovn-controller" probeResult="failure" output=<
Oct 14 13:35:10.432265 master-1 kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 14 13:35:10.432265 master-1 kubenswrapper[4740]: >
Oct 14 13:35:10.434859 master-1 kubenswrapper[4740]: I1014 13:35:10.434793 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wpjmb"
Oct 14 13:35:10.517499 master-1 kubenswrapper[4740]: I1014 13:35:10.517457 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-nb\") pod \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") "
Oct 14 13:35:10.517818 master-1 kubenswrapper[4740]: I1014 13:35:10.517787 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz8bf\" (UniqueName: \"kubernetes.io/projected/ea346082-3d5b-4eec-bc76-e69e6c45b08a-kube-api-access-hz8bf\") pod \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") "
Oct 14 13:35:10.517995 master-1 kubenswrapper[4740]: I1014 13:35:10.517981 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-sb\") pod \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") "
Oct 14 13:35:10.518176 master-1 kubenswrapper[4740]: I1014 13:35:10.518161 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-dns-svc\") pod \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") "
Oct 14 13:35:10.518318 master-1 kubenswrapper[4740]: I1014 13:35:10.518303 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-config\") pod \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\" (UID: \"ea346082-3d5b-4eec-bc76-e69e6c45b08a\") "
Oct 14 13:35:10.521861 master-1 kubenswrapper[4740]: I1014 13:35:10.521797 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea346082-3d5b-4eec-bc76-e69e6c45b08a-kube-api-access-hz8bf" (OuterVolumeSpecName: "kube-api-access-hz8bf") pod "ea346082-3d5b-4eec-bc76-e69e6c45b08a" (UID: "ea346082-3d5b-4eec-bc76-e69e6c45b08a"). InnerVolumeSpecName "kube-api-access-hz8bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:35:10.536504 master-1 kubenswrapper[4740]: I1014 13:35:10.536395 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea346082-3d5b-4eec-bc76-e69e6c45b08a" (UID: "ea346082-3d5b-4eec-bc76-e69e6c45b08a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:35:10.541512 master-1 kubenswrapper[4740]: I1014 13:35:10.541188 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-config" (OuterVolumeSpecName: "config") pod "ea346082-3d5b-4eec-bc76-e69e6c45b08a" (UID: "ea346082-3d5b-4eec-bc76-e69e6c45b08a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:35:10.541512 master-1 kubenswrapper[4740]: I1014 13:35:10.541319 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea346082-3d5b-4eec-bc76-e69e6c45b08a" (UID: "ea346082-3d5b-4eec-bc76-e69e6c45b08a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:35:10.544727 master-1 kubenswrapper[4740]: I1014 13:35:10.544689 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea346082-3d5b-4eec-bc76-e69e6c45b08a" (UID: "ea346082-3d5b-4eec-bc76-e69e6c45b08a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:35:10.620440 master-1 kubenswrapper[4740]: I1014 13:35:10.620193 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:10.620440 master-1 kubenswrapper[4740]: I1014 13:35:10.620255 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-dns-svc\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:10.620440 master-1 kubenswrapper[4740]: I1014 13:35:10.620266 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:10.620440 master-1 kubenswrapper[4740]: I1014 13:35:10.620274 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea346082-3d5b-4eec-bc76-e69e6c45b08a-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:10.620440 master-1 kubenswrapper[4740]: I1014 13:35:10.620285 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz8bf\" (UniqueName: \"kubernetes.io/projected/ea346082-3d5b-4eec-bc76-e69e6c45b08a-kube-api-access-hz8bf\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:10.735566 master-1 kubenswrapper[4740]: I1014 13:35:10.735506 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6577479d4f-25bpt" event={"ID":"ea346082-3d5b-4eec-bc76-e69e6c45b08a","Type":"ContainerDied","Data":"14b79be0f49e4eb8e67b84af562de240ee07dae3cbea4e5b9b59ff4c00e2a34e"}
Oct 14 13:35:10.735794 master-1 kubenswrapper[4740]: I1014 13:35:10.735565 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6577479d4f-25bpt"
Oct 14 13:35:10.735794 master-1 kubenswrapper[4740]: I1014 13:35:10.735578 4740 scope.go:117] "RemoveContainer" containerID="244ac3760032f579e60e9f7ae11b63eb59eac1d08eb397b4951edb380c48fb4b"
Oct 14 13:35:10.953311 master-1 kubenswrapper[4740]: I1014 13:35:10.953250 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6577479d4f-25bpt"]
Oct 14 13:35:11.007813 master-1 kubenswrapper[4740]: I1014 13:35:11.007725 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6577479d4f-25bpt"]
Oct 14 13:35:12.755431 master-1 kubenswrapper[4740]: I1014 13:35:12.755336 4740 generic.go:334] "Generic (PLEG): container finished" podID="22432753-e8c4-45ce-8e09-f9d497dc8c8b" containerID="f61c0813208d698459e3a84cadb239c6e5d819652eeebc235b87668c5ee8b28a" exitCode=0
Oct 14 13:35:12.755431 master-1 kubenswrapper[4740]: I1014 13:35:12.755410 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"22432753-e8c4-45ce-8e09-f9d497dc8c8b","Type":"ContainerDied","Data":"f61c0813208d698459e3a84cadb239c6e5d819652eeebc235b87668c5ee8b28a"}
Oct 14 13:35:12.962095 master-1 kubenswrapper[4740]: I1014 13:35:12.961979 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea346082-3d5b-4eec-bc76-e69e6c45b08a" path="/var/lib/kubelet/pods/ea346082-3d5b-4eec-bc76-e69e6c45b08a/volumes"
Oct 14 13:35:13.767479 master-1 kubenswrapper[4740]: I1014 13:35:13.767404 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-2" event={"ID":"22432753-e8c4-45ce-8e09-f9d497dc8c8b","Type":"ContainerStarted","Data":"f6f4f6c7abab5eabfd462a8dc9523ce319d37af703ae6f379ef821a1f55cbc5e"}
Oct 14 13:35:13.768010 master-1 kubenswrapper[4740]: I1014 13:35:13.767693 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-2"
Oct 14 13:35:15.438703 master-1 kubenswrapper[4740]: I1014 13:35:15.438613 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-26cmc" podUID="a8c155bd-baa3-49a7-bada-ec4d01119872" containerName="ovn-controller" probeResult="failure" output=<
Oct 14 13:35:15.438703 master-1 kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 14 13:35:15.438703 master-1 kubenswrapper[4740]: >
Oct 14 13:35:16.090444 master-1 kubenswrapper[4740]: I1014 13:35:16.090356 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-2" podStartSLOduration=65.09033422 podStartE2EDuration="1m5.09033422s" podCreationTimestamp="2025-10-14 13:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:14.874857109 +0000 UTC m=+1740.685146448" watchObservedRunningTime="2025-10-14 13:35:16.09033422 +0000 UTC m=+1741.900623579"
Oct 14 13:35:16.097643 master-1 kubenswrapper[4740]: I1014 13:35:16.097583 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5ftz8"]
Oct 14 13:35:16.098103 master-1 kubenswrapper[4740]: E1014 13:35:16.098076 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea346082-3d5b-4eec-bc76-e69e6c45b08a" containerName="init"
Oct 14 13:35:16.098160 master-1 kubenswrapper[4740]: I1014 13:35:16.098108 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea346082-3d5b-4eec-bc76-e69e6c45b08a" containerName="init"
Oct 14 13:35:16.098435 master-1 kubenswrapper[4740]: I1014 13:35:16.098410 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea346082-3d5b-4eec-bc76-e69e6c45b08a" containerName="init"
Oct 14 13:35:16.099509 master-1 kubenswrapper[4740]: I1014 13:35:16.099476 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.107921 master-1 kubenswrapper[4740]: I1014 13:35:16.107877 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Oct 14 13:35:16.108059 master-1 kubenswrapper[4740]: I1014 13:35:16.108003 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Oct 14 13:35:16.108099 master-1 kubenswrapper[4740]: I1014 13:35:16.108024 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Oct 14 13:35:16.108491 master-1 kubenswrapper[4740]: I1014 13:35:16.108460 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Oct 14 13:35:16.109773 master-1 kubenswrapper[4740]: I1014 13:35:16.109750 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5ftz8"]
Oct 14 13:35:16.236423 master-1 kubenswrapper[4740]: I1014 13:35:16.236362 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-dispersionconf\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.236682 master-1 kubenswrapper[4740]: I1014 13:35:16.236548 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/05917721-13c9-4d5c-93a6-b00662018163-etc-swift\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.236682 master-1 kubenswrapper[4740]: I1014 13:35:16.236645 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-ring-data-devices\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.236760 master-1 kubenswrapper[4740]: I1014 13:35:16.236741 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpkxp\" (UniqueName: \"kubernetes.io/projected/05917721-13c9-4d5c-93a6-b00662018163-kube-api-access-cpkxp\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.236919 master-1 kubenswrapper[4740]: I1014 13:35:16.236875 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-scripts\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.236994 master-1 kubenswrapper[4740]: I1014 13:35:16.236937 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-combined-ca-bundle\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.237134 master-1 kubenswrapper[4740]: I1014 13:35:16.237112 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-swiftconf\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.338678 master-1 kubenswrapper[4740]: I1014 13:35:16.338619 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-dispersionconf\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.338915 master-1 kubenswrapper[4740]: I1014 13:35:16.338706 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/05917721-13c9-4d5c-93a6-b00662018163-etc-swift\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.338915 master-1 kubenswrapper[4740]: I1014 13:35:16.338754 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-ring-data-devices\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.338915 master-1 kubenswrapper[4740]: I1014 13:35:16.338797 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpkxp\" (UniqueName: \"kubernetes.io/projected/05917721-13c9-4d5c-93a6-b00662018163-kube-api-access-cpkxp\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.338915 master-1 kubenswrapper[4740]: I1014 13:35:16.338841 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-scripts\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.338915 master-1 kubenswrapper[4740]: I1014 13:35:16.338864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-combined-ca-bundle\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.339115 master-1 kubenswrapper[4740]: I1014 13:35:16.338925 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-swiftconf\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.339350 master-1 kubenswrapper[4740]: I1014 13:35:16.339304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/05917721-13c9-4d5c-93a6-b00662018163-etc-swift\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.340050 master-1 kubenswrapper[4740]: I1014 13:35:16.340010 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-ring-data-devices\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.340244 master-1 kubenswrapper[4740]: I1014 13:35:16.340113 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-scripts\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.343544 master-1 kubenswrapper[4740]: I1014 13:35:16.343455 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-dispersionconf\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.344362 master-1 kubenswrapper[4740]: I1014 13:35:16.344323 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-combined-ca-bundle\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.345597 master-1 kubenswrapper[4740]: I1014 13:35:16.345569 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-swiftconf\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.450670 master-1 kubenswrapper[4740]: I1014 13:35:16.450609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpkxp\" (UniqueName: \"kubernetes.io/projected/05917721-13c9-4d5c-93a6-b00662018163-kube-api-access-cpkxp\") pod \"swift-ring-rebalance-5ftz8\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") " pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:16.728029 master-1 kubenswrapper[4740]: I1014 13:35:16.727898 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:17.200778 master-1 kubenswrapper[4740]: I1014 13:35:17.200710 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5ftz8"]
Oct 14 13:35:17.217994 master-1 kubenswrapper[4740]: W1014 13:35:17.217898 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05917721_13c9_4d5c_93a6_b00662018163.slice/crio-bbffabb7e7ff05bfb7adbb8806ee1d52a745478196967bd8de8c6e82f4889a41 WatchSource:0}: Error finding container bbffabb7e7ff05bfb7adbb8806ee1d52a745478196967bd8de8c6e82f4889a41: Status 404 returned error can't find the container with id bbffabb7e7ff05bfb7adbb8806ee1d52a745478196967bd8de8c6e82f4889a41
Oct 14 13:35:17.811490 master-1 kubenswrapper[4740]: I1014 13:35:17.811399 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5ftz8" event={"ID":"05917721-13c9-4d5c-93a6-b00662018163","Type":"ContainerStarted","Data":"bbffabb7e7ff05bfb7adbb8806ee1d52a745478196967bd8de8c6e82f4889a41"}
Oct 14 13:35:18.961495 master-1 kubenswrapper[4740]: I1014 13:35:18.961383 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-1"
Oct 14 13:35:19.001894 master-1 kubenswrapper[4740]: I1014 13:35:19.001818 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-1"
Oct 14 13:35:19.852351 master-1 kubenswrapper[4740]: I1014 13:35:19.852307 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bf7489945-tjzl4"]
Oct 14 13:35:19.852764 master-1 kubenswrapper[4740]: I1014 13:35:19.852737 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" podUID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerName="dnsmasq-dns" containerID="cri-o://a447d8cc4e2a7e8ade746b9ea249b20641c4fc29b36f3d6cd5430d60baa9ad7b" gracePeriod=10
Oct 14 13:35:20.464632 master-1 kubenswrapper[4740]: I1014 13:35:20.464568 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-26cmc" podUID="a8c155bd-baa3-49a7-bada-ec4d01119872" containerName="ovn-controller" probeResult="failure" output=<
Oct 14 13:35:20.464632 master-1 kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 14 13:35:20.464632 master-1 kubenswrapper[4740]: >
Oct 14 13:35:20.856156 master-1 kubenswrapper[4740]: I1014 13:35:20.856085 4740 generic.go:334] "Generic (PLEG): container finished" podID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerID="a447d8cc4e2a7e8ade746b9ea249b20641c4fc29b36f3d6cd5430d60baa9ad7b" exitCode=0
Oct 14 13:35:20.856428 master-1 kubenswrapper[4740]: I1014 13:35:20.856162 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" event={"ID":"b2247769-a88f-4909-98d1-2cb5b442c9de","Type":"ContainerDied","Data":"a447d8cc4e2a7e8ade746b9ea249b20641c4fc29b36f3d6cd5430d60baa9ad7b"}
Oct 14 13:35:20.915973 master-1 kubenswrapper[4740]: I1014 13:35:20.915892 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Oct 14 13:35:21.147553 master-1 kubenswrapper[4740]: I1014 13:35:21.147203 4740 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" Oct 14 13:35:21.277085 master-1 kubenswrapper[4740]: I1014 13:35:21.276492 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-config\") pod \"b2247769-a88f-4909-98d1-2cb5b442c9de\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " Oct 14 13:35:21.277085 master-1 kubenswrapper[4740]: I1014 13:35:21.276615 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6vqw\" (UniqueName: \"kubernetes.io/projected/b2247769-a88f-4909-98d1-2cb5b442c9de-kube-api-access-z6vqw\") pod \"b2247769-a88f-4909-98d1-2cb5b442c9de\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " Oct 14 13:35:21.277085 master-1 kubenswrapper[4740]: I1014 13:35:21.276656 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-dns-svc\") pod \"b2247769-a88f-4909-98d1-2cb5b442c9de\" (UID: \"b2247769-a88f-4909-98d1-2cb5b442c9de\") " Oct 14 13:35:21.283912 master-1 kubenswrapper[4740]: I1014 13:35:21.283815 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2247769-a88f-4909-98d1-2cb5b442c9de-kube-api-access-z6vqw" (OuterVolumeSpecName: "kube-api-access-z6vqw") pod "b2247769-a88f-4909-98d1-2cb5b442c9de" (UID: "b2247769-a88f-4909-98d1-2cb5b442c9de"). InnerVolumeSpecName "kube-api-access-z6vqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:21.316077 master-1 kubenswrapper[4740]: I1014 13:35:21.316011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2247769-a88f-4909-98d1-2cb5b442c9de" (UID: "b2247769-a88f-4909-98d1-2cb5b442c9de"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:21.319116 master-1 kubenswrapper[4740]: I1014 13:35:21.319053 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-config" (OuterVolumeSpecName: "config") pod "b2247769-a88f-4909-98d1-2cb5b442c9de" (UID: "b2247769-a88f-4909-98d1-2cb5b442c9de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:21.378804 master-1 kubenswrapper[4740]: I1014 13:35:21.378651 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:21.378804 master-1 kubenswrapper[4740]: I1014 13:35:21.378694 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6vqw\" (UniqueName: \"kubernetes.io/projected/b2247769-a88f-4909-98d1-2cb5b442c9de-kube-api-access-z6vqw\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:21.378804 master-1 kubenswrapper[4740]: I1014 13:35:21.378708 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2247769-a88f-4909-98d1-2cb5b442c9de-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:21.868326 master-1 kubenswrapper[4740]: I1014 13:35:21.868215 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5ftz8" event={"ID":"05917721-13c9-4d5c-93a6-b00662018163","Type":"ContainerStarted","Data":"b96b56de78278d5c8f8faad7f58681461343e0cc1bbad11f1ce2703769161336"} Oct 14 13:35:21.870761 master-1 kubenswrapper[4740]: I1014 13:35:21.870711 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" 
event={"ID":"b2247769-a88f-4909-98d1-2cb5b442c9de","Type":"ContainerDied","Data":"6183e8fdcfcb5aa531353339b59de3a5a945bba0fdecc57f5de40fb4b70b72b0"} Oct 14 13:35:21.870844 master-1 kubenswrapper[4740]: I1014 13:35:21.870784 4740 scope.go:117] "RemoveContainer" containerID="a447d8cc4e2a7e8ade746b9ea249b20641c4fc29b36f3d6cd5430d60baa9ad7b" Oct 14 13:35:21.871041 master-1 kubenswrapper[4740]: I1014 13:35:21.870997 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf7489945-tjzl4" Oct 14 13:35:21.898957 master-1 kubenswrapper[4740]: I1014 13:35:21.898896 4740 scope.go:117] "RemoveContainer" containerID="cb15e6658f33630641614c47e29b1f962d807bfe907e52320f7cebfdeef74662" Oct 14 13:35:21.900055 master-1 kubenswrapper[4740]: I1014 13:35:21.899982 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5ftz8" podStartSLOduration=3.559565723 podStartE2EDuration="6.899956028s" podCreationTimestamp="2025-10-14 13:35:15 +0000 UTC" firstStartedPulling="2025-10-14 13:35:17.223210058 +0000 UTC m=+1743.033499387" lastFinishedPulling="2025-10-14 13:35:20.563600363 +0000 UTC m=+1746.373889692" observedRunningTime="2025-10-14 13:35:21.894024931 +0000 UTC m=+1747.704314260" watchObservedRunningTime="2025-10-14 13:35:21.899956028 +0000 UTC m=+1747.710245357" Oct 14 13:35:21.932005 master-1 kubenswrapper[4740]: I1014 13:35:21.931930 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bf7489945-tjzl4"] Oct 14 13:35:21.942850 master-1 kubenswrapper[4740]: I1014 13:35:21.942793 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bf7489945-tjzl4"] Oct 14 13:35:22.953261 master-1 kubenswrapper[4740]: I1014 13:35:22.953172 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2247769-a88f-4909-98d1-2cb5b442c9de" path="/var/lib/kubelet/pods/b2247769-a88f-4909-98d1-2cb5b442c9de/volumes" Oct 14 
13:35:24.668568 master-1 kubenswrapper[4740]: I1014 13:35:24.668420 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-26cmc-config-wxcn9"] Oct 14 13:35:24.669089 master-1 kubenswrapper[4740]: E1014 13:35:24.668813 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerName="init" Oct 14 13:35:24.669089 master-1 kubenswrapper[4740]: I1014 13:35:24.668826 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerName="init" Oct 14 13:35:24.669089 master-1 kubenswrapper[4740]: E1014 13:35:24.668886 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerName="dnsmasq-dns" Oct 14 13:35:24.669089 master-1 kubenswrapper[4740]: I1014 13:35:24.668893 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerName="dnsmasq-dns" Oct 14 13:35:24.669089 master-1 kubenswrapper[4740]: I1014 13:35:24.669065 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2247769-a88f-4909-98d1-2cb5b442c9de" containerName="dnsmasq-dns" Oct 14 13:35:24.669817 master-1 kubenswrapper[4740]: I1014 13:35:24.669777 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.672180 master-1 kubenswrapper[4740]: I1014 13:35:24.672142 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 14 13:35:24.686859 master-1 kubenswrapper[4740]: I1014 13:35:24.686808 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-26cmc-config-wxcn9"] Oct 14 13:35:24.848565 master-1 kubenswrapper[4740]: I1014 13:35:24.848478 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcn6g\" (UniqueName: \"kubernetes.io/projected/456de4fc-6251-4ab7-b211-c564642c6c82-kube-api-access-kcn6g\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.848769 master-1 kubenswrapper[4740]: I1014 13:35:24.848668 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run-ovn\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.848809 master-1 kubenswrapper[4740]: I1014 13:35:24.848767 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.848857 master-1 kubenswrapper[4740]: I1014 13:35:24.848837 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-additional-scripts\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.848903 master-1 kubenswrapper[4740]: I1014 13:35:24.848882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-log-ovn\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.848933 master-1 kubenswrapper[4740]: I1014 13:35:24.848903 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-scripts\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950167 master-1 kubenswrapper[4740]: I1014 13:35:24.950033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-log-ovn\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950167 master-1 kubenswrapper[4740]: I1014 13:35:24.950104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-scripts\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950427 master-1 kubenswrapper[4740]: I1014 13:35:24.950197 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-log-ovn\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950427 master-1 kubenswrapper[4740]: I1014 13:35:24.950290 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcn6g\" (UniqueName: \"kubernetes.io/projected/456de4fc-6251-4ab7-b211-c564642c6c82-kube-api-access-kcn6g\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950427 master-1 kubenswrapper[4740]: I1014 13:35:24.950346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run-ovn\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950427 master-1 kubenswrapper[4740]: I1014 13:35:24.950378 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950427 master-1 kubenswrapper[4740]: I1014 13:35:24.950423 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-additional-scripts\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950752 master-1 kubenswrapper[4740]: I1014 13:35:24.950698 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.950808 master-1 kubenswrapper[4740]: I1014 13:35:24.950752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run-ovn\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.951147 master-1 kubenswrapper[4740]: I1014 13:35:24.951111 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-additional-scripts\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:24.952743 master-1 kubenswrapper[4740]: I1014 13:35:24.952694 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-scripts\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:25.439138 master-1 kubenswrapper[4740]: I1014 13:35:25.439074 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-26cmc" podUID="a8c155bd-baa3-49a7-bada-ec4d01119872" containerName="ovn-controller" probeResult="failure" output=< Oct 14 13:35:25.439138 master-1 kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 14 13:35:25.439138 master-1 kubenswrapper[4740]: > Oct 14 
13:35:25.691352 master-1 kubenswrapper[4740]: I1014 13:35:25.691166 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcn6g\" (UniqueName: \"kubernetes.io/projected/456de4fc-6251-4ab7-b211-c564642c6c82-kube-api-access-kcn6g\") pod \"ovn-controller-26cmc-config-wxcn9\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:25.891016 master-1 kubenswrapper[4740]: I1014 13:35:25.890912 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:26.553710 master-1 kubenswrapper[4740]: I1014 13:35:26.553647 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-26cmc-config-wxcn9"] Oct 14 13:35:26.767122 master-1 kubenswrapper[4740]: W1014 13:35:26.766930 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456de4fc_6251_4ab7_b211_c564642c6c82.slice/crio-3c4238f92921527814d576475304bcd79e47022289dd8d00a96eafcfb761b69d WatchSource:0}: Error finding container 3c4238f92921527814d576475304bcd79e47022289dd8d00a96eafcfb761b69d: Status 404 returned error can't find the container with id 3c4238f92921527814d576475304bcd79e47022289dd8d00a96eafcfb761b69d Oct 14 13:35:26.921486 master-1 kubenswrapper[4740]: I1014 13:35:26.921409 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-26cmc-config-wxcn9" event={"ID":"456de4fc-6251-4ab7-b211-c564642c6c82","Type":"ContainerStarted","Data":"3c4238f92921527814d576475304bcd79e47022289dd8d00a96eafcfb761b69d"} Oct 14 13:35:27.939839 master-1 kubenswrapper[4740]: I1014 13:35:27.939768 4740 generic.go:334] "Generic (PLEG): container finished" podID="456de4fc-6251-4ab7-b211-c564642c6c82" containerID="8c1f1d0eaa9bf84d9707b573632a457b76ed1bb1933e088a18e8ca6ccddf7b3d" exitCode=0 Oct 14 13:35:27.939839 master-1 kubenswrapper[4740]: 
I1014 13:35:27.939832 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-26cmc-config-wxcn9" event={"ID":"456de4fc-6251-4ab7-b211-c564642c6c82","Type":"ContainerDied","Data":"8c1f1d0eaa9bf84d9707b573632a457b76ed1bb1933e088a18e8ca6ccddf7b3d"} Oct 14 13:35:28.195579 master-1 kubenswrapper[4740]: I1014 13:35:28.195454 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-2" Oct 14 13:35:28.948492 master-1 kubenswrapper[4740]: I1014 13:35:28.948441 4740 generic.go:334] "Generic (PLEG): container finished" podID="05917721-13c9-4d5c-93a6-b00662018163" containerID="b96b56de78278d5c8f8faad7f58681461343e0cc1bbad11f1ce2703769161336" exitCode=0 Oct 14 13:35:28.960089 master-1 kubenswrapper[4740]: I1014 13:35:28.960024 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5ftz8" event={"ID":"05917721-13c9-4d5c-93a6-b00662018163","Type":"ContainerDied","Data":"b96b56de78278d5c8f8faad7f58681461343e0cc1bbad11f1ce2703769161336"} Oct 14 13:35:29.393417 master-1 kubenswrapper[4740]: I1014 13:35:29.393338 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9jggw"] Oct 14 13:35:29.394421 master-1 kubenswrapper[4740]: I1014 13:35:29.394383 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9jggw" Oct 14 13:35:29.489241 master-1 kubenswrapper[4740]: I1014 13:35:29.489147 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcm5c\" (UniqueName: \"kubernetes.io/projected/cc7697c8-a46f-40f0-ab6a-e02b46a7a832-kube-api-access-fcm5c\") pod \"neutron-db-create-9jggw\" (UID: \"cc7697c8-a46f-40f0-ab6a-e02b46a7a832\") " pod="openstack/neutron-db-create-9jggw" Oct 14 13:35:29.610156 master-1 kubenswrapper[4740]: I1014 13:35:29.610062 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcm5c\" (UniqueName: \"kubernetes.io/projected/cc7697c8-a46f-40f0-ab6a-e02b46a7a832-kube-api-access-fcm5c\") pod \"neutron-db-create-9jggw\" (UID: \"cc7697c8-a46f-40f0-ab6a-e02b46a7a832\") " pod="openstack/neutron-db-create-9jggw" Oct 14 13:35:29.646213 master-1 kubenswrapper[4740]: I1014 13:35:29.646106 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9jggw"] Oct 14 13:35:29.724755 master-1 kubenswrapper[4740]: I1014 13:35:29.724688 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:29.768058 master-1 kubenswrapper[4740]: I1014 13:35:29.767995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcm5c\" (UniqueName: \"kubernetes.io/projected/cc7697c8-a46f-40f0-ab6a-e02b46a7a832-kube-api-access-fcm5c\") pod \"neutron-db-create-9jggw\" (UID: \"cc7697c8-a46f-40f0-ab6a-e02b46a7a832\") " pod="openstack/neutron-db-create-9jggw" Oct 14 13:35:29.918483 master-1 kubenswrapper[4740]: I1014 13:35:29.918344 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-scripts\") pod \"456de4fc-6251-4ab7-b211-c564642c6c82\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " Oct 14 13:35:29.918483 master-1 kubenswrapper[4740]: I1014 13:35:29.918463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-additional-scripts\") pod \"456de4fc-6251-4ab7-b211-c564642c6c82\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " Oct 14 13:35:29.918735 master-1 kubenswrapper[4740]: I1014 13:35:29.918541 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run-ovn\") pod \"456de4fc-6251-4ab7-b211-c564642c6c82\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " Oct 14 13:35:29.918735 master-1 kubenswrapper[4740]: I1014 13:35:29.918583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run\") pod \"456de4fc-6251-4ab7-b211-c564642c6c82\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " Oct 14 13:35:29.918735 master-1 kubenswrapper[4740]: I1014 13:35:29.918679 4740 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "456de4fc-6251-4ab7-b211-c564642c6c82" (UID: "456de4fc-6251-4ab7-b211-c564642c6c82"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:35:29.918857 master-1 kubenswrapper[4740]: I1014 13:35:29.918772 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcn6g\" (UniqueName: \"kubernetes.io/projected/456de4fc-6251-4ab7-b211-c564642c6c82-kube-api-access-kcn6g\") pod \"456de4fc-6251-4ab7-b211-c564642c6c82\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " Oct 14 13:35:29.918857 master-1 kubenswrapper[4740]: I1014 13:35:29.918810 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-log-ovn\") pod \"456de4fc-6251-4ab7-b211-c564642c6c82\" (UID: \"456de4fc-6251-4ab7-b211-c564642c6c82\") " Oct 14 13:35:29.918943 master-1 kubenswrapper[4740]: I1014 13:35:29.918837 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run" (OuterVolumeSpecName: "var-run") pod "456de4fc-6251-4ab7-b211-c564642c6c82" (UID: "456de4fc-6251-4ab7-b211-c564642c6c82"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:35:29.919000 master-1 kubenswrapper[4740]: I1014 13:35:29.918980 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "456de4fc-6251-4ab7-b211-c564642c6c82" (UID: "456de4fc-6251-4ab7-b211-c564642c6c82"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:35:29.919270 master-1 kubenswrapper[4740]: I1014 13:35:29.919224 4740 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-log-ovn\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:29.919270 master-1 kubenswrapper[4740]: I1014 13:35:29.919265 4740 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run-ovn\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:29.919398 master-1 kubenswrapper[4740]: I1014 13:35:29.919275 4740 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/456de4fc-6251-4ab7-b211-c564642c6c82-var-run\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:29.919398 master-1 kubenswrapper[4740]: I1014 13:35:29.919301 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "456de4fc-6251-4ab7-b211-c564642c6c82" (UID: "456de4fc-6251-4ab7-b211-c564642c6c82"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:29.920255 master-1 kubenswrapper[4740]: I1014 13:35:29.920211 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-scripts" (OuterVolumeSpecName: "scripts") pod "456de4fc-6251-4ab7-b211-c564642c6c82" (UID: "456de4fc-6251-4ab7-b211-c564642c6c82"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:29.921734 master-1 kubenswrapper[4740]: I1014 13:35:29.921702 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456de4fc-6251-4ab7-b211-c564642c6c82-kube-api-access-kcn6g" (OuterVolumeSpecName: "kube-api-access-kcn6g") pod "456de4fc-6251-4ab7-b211-c564642c6c82" (UID: "456de4fc-6251-4ab7-b211-c564642c6c82"). InnerVolumeSpecName "kube-api-access-kcn6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:29.963531 master-1 kubenswrapper[4740]: I1014 13:35:29.963383 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-26cmc-config-wxcn9" event={"ID":"456de4fc-6251-4ab7-b211-c564642c6c82","Type":"ContainerDied","Data":"3c4238f92921527814d576475304bcd79e47022289dd8d00a96eafcfb761b69d"} Oct 14 13:35:29.964366 master-1 kubenswrapper[4740]: I1014 13:35:29.963459 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4238f92921527814d576475304bcd79e47022289dd8d00a96eafcfb761b69d" Oct 14 13:35:29.964366 master-1 kubenswrapper[4740]: I1014 13:35:29.963421 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-26cmc-config-wxcn9" Oct 14 13:35:30.014166 master-1 kubenswrapper[4740]: I1014 13:35:30.013520 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9jggw"
Oct 14 13:35:30.020909 master-1 kubenswrapper[4740]: I1014 13:35:30.020822 4740 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-additional-scripts\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.020909 master-1 kubenswrapper[4740]: I1014 13:35:30.020886 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcn6g\" (UniqueName: \"kubernetes.io/projected/456de4fc-6251-4ab7-b211-c564642c6c82-kube-api-access-kcn6g\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.021096 master-1 kubenswrapper[4740]: I1014 13:35:30.020915 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/456de4fc-6251-4ab7-b211-c564642c6c82-scripts\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.425538 master-1 kubenswrapper[4740]: I1014 13:35:30.425467 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-26cmc"
Oct 14 13:35:30.593916 master-1 kubenswrapper[4740]: I1014 13:35:30.593789 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9jggw"]
Oct 14 13:35:30.651674 master-1 kubenswrapper[4740]: I1014 13:35:30.651643 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:30.834739 master-1 kubenswrapper[4740]: I1014 13:35:30.834667 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-dispersionconf\") pod \"05917721-13c9-4d5c-93a6-b00662018163\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") "
Oct 14 13:35:30.834926 master-1 kubenswrapper[4740]: I1014 13:35:30.834786 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-combined-ca-bundle\") pod \"05917721-13c9-4d5c-93a6-b00662018163\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") "
Oct 14 13:35:30.835208 master-1 kubenswrapper[4740]: I1014 13:35:30.835165 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-scripts\") pod \"05917721-13c9-4d5c-93a6-b00662018163\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") "
Oct 14 13:35:30.835343 master-1 kubenswrapper[4740]: I1014 13:35:30.835248 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-swiftconf\") pod \"05917721-13c9-4d5c-93a6-b00662018163\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") "
Oct 14 13:35:30.835343 master-1 kubenswrapper[4740]: I1014 13:35:30.835337 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/05917721-13c9-4d5c-93a6-b00662018163-etc-swift\") pod \"05917721-13c9-4d5c-93a6-b00662018163\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") "
Oct 14 13:35:30.835553 master-1 kubenswrapper[4740]: I1014 13:35:30.835375 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpkxp\" (UniqueName: \"kubernetes.io/projected/05917721-13c9-4d5c-93a6-b00662018163-kube-api-access-cpkxp\") pod \"05917721-13c9-4d5c-93a6-b00662018163\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") "
Oct 14 13:35:30.835553 master-1 kubenswrapper[4740]: I1014 13:35:30.835438 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-ring-data-devices\") pod \"05917721-13c9-4d5c-93a6-b00662018163\" (UID: \"05917721-13c9-4d5c-93a6-b00662018163\") "
Oct 14 13:35:30.836088 master-1 kubenswrapper[4740]: I1014 13:35:30.836030 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "05917721-13c9-4d5c-93a6-b00662018163" (UID: "05917721-13c9-4d5c-93a6-b00662018163"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:35:30.836299 master-1 kubenswrapper[4740]: I1014 13:35:30.836137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05917721-13c9-4d5c-93a6-b00662018163-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "05917721-13c9-4d5c-93a6-b00662018163" (UID: "05917721-13c9-4d5c-93a6-b00662018163"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:35:30.839542 master-1 kubenswrapper[4740]: I1014 13:35:30.839476 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05917721-13c9-4d5c-93a6-b00662018163-kube-api-access-cpkxp" (OuterVolumeSpecName: "kube-api-access-cpkxp") pod "05917721-13c9-4d5c-93a6-b00662018163" (UID: "05917721-13c9-4d5c-93a6-b00662018163"). InnerVolumeSpecName "kube-api-access-cpkxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:35:30.847153 master-1 kubenswrapper[4740]: I1014 13:35:30.846977 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "05917721-13c9-4d5c-93a6-b00662018163" (UID: "05917721-13c9-4d5c-93a6-b00662018163"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:35:30.853774 master-1 kubenswrapper[4740]: I1014 13:35:30.853710 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-scripts" (OuterVolumeSpecName: "scripts") pod "05917721-13c9-4d5c-93a6-b00662018163" (UID: "05917721-13c9-4d5c-93a6-b00662018163"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:35:30.868509 master-1 kubenswrapper[4740]: I1014 13:35:30.867131 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05917721-13c9-4d5c-93a6-b00662018163" (UID: "05917721-13c9-4d5c-93a6-b00662018163"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:35:30.886371 master-1 kubenswrapper[4740]: I1014 13:35:30.884466 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "05917721-13c9-4d5c-93a6-b00662018163" (UID: "05917721-13c9-4d5c-93a6-b00662018163"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:35:30.942780 master-1 kubenswrapper[4740]: I1014 13:35:30.942713 4740 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-dispersionconf\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.942780 master-1 kubenswrapper[4740]: I1014 13:35:30.942772 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.942780 master-1 kubenswrapper[4740]: I1014 13:35:30.942786 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-scripts\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.943084 master-1 kubenswrapper[4740]: I1014 13:35:30.942796 4740 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/05917721-13c9-4d5c-93a6-b00662018163-swiftconf\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.943084 master-1 kubenswrapper[4740]: I1014 13:35:30.942808 4740 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/05917721-13c9-4d5c-93a6-b00662018163-etc-swift\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.943084 master-1 kubenswrapper[4740]: I1014 13:35:30.942817 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpkxp\" (UniqueName: \"kubernetes.io/projected/05917721-13c9-4d5c-93a6-b00662018163-kube-api-access-cpkxp\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.943084 master-1 kubenswrapper[4740]: I1014 13:35:30.942827 4740 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/05917721-13c9-4d5c-93a6-b00662018163-ring-data-devices\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:30.976877 master-1 kubenswrapper[4740]: I1014 13:35:30.976805 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9jggw" event={"ID":"cc7697c8-a46f-40f0-ab6a-e02b46a7a832","Type":"ContainerStarted","Data":"e2bcf28fa5173e32513fb968032043ccd5d5c391a650841af519d43ddea80c60"}
Oct 14 13:35:30.976877 master-1 kubenswrapper[4740]: I1014 13:35:30.976872 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9jggw" event={"ID":"cc7697c8-a46f-40f0-ab6a-e02b46a7a832","Type":"ContainerStarted","Data":"a928f55da6b4a086402cf78e9144fcde3c8557715dc9c566a2319717daa59272"}
Oct 14 13:35:30.981077 master-1 kubenswrapper[4740]: I1014 13:35:30.981037 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5ftz8" event={"ID":"05917721-13c9-4d5c-93a6-b00662018163","Type":"ContainerDied","Data":"bbffabb7e7ff05bfb7adbb8806ee1d52a745478196967bd8de8c6e82f4889a41"}
Oct 14 13:35:30.981077 master-1 kubenswrapper[4740]: I1014 13:35:30.981075 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbffabb7e7ff05bfb7adbb8806ee1d52a745478196967bd8de8c6e82f4889a41"
Oct 14 13:35:30.981171 master-1 kubenswrapper[4740]: I1014 13:35:30.981149 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5ftz8"
Oct 14 13:35:31.997073 master-1 kubenswrapper[4740]: I1014 13:35:31.996982 4740 generic.go:334] "Generic (PLEG): container finished" podID="cc7697c8-a46f-40f0-ab6a-e02b46a7a832" containerID="e2bcf28fa5173e32513fb968032043ccd5d5c391a650841af519d43ddea80c60" exitCode=0
Oct 14 13:35:31.997073 master-1 kubenswrapper[4740]: I1014 13:35:31.997079 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9jggw" event={"ID":"cc7697c8-a46f-40f0-ab6a-e02b46a7a832","Type":"ContainerDied","Data":"e2bcf28fa5173e32513fb968032043ccd5d5c391a650841af519d43ddea80c60"}
Oct 14 13:35:32.992758 master-1 kubenswrapper[4740]: I1014 13:35:32.992689 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9jggw"
Oct 14 13:35:33.003138 master-1 kubenswrapper[4740]: I1014 13:35:33.003077 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-26cmc-config-wxcn9"]
Oct 14 13:35:33.006309 master-1 kubenswrapper[4740]: I1014 13:35:33.006220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9jggw" event={"ID":"cc7697c8-a46f-40f0-ab6a-e02b46a7a832","Type":"ContainerDied","Data":"a928f55da6b4a086402cf78e9144fcde3c8557715dc9c566a2319717daa59272"}
Oct 14 13:35:33.006309 master-1 kubenswrapper[4740]: I1014 13:35:33.006305 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a928f55da6b4a086402cf78e9144fcde3c8557715dc9c566a2319717daa59272"
Oct 14 13:35:33.006534 master-1 kubenswrapper[4740]: I1014 13:35:33.006337 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9jggw"
Oct 14 13:35:33.089247 master-1 kubenswrapper[4740]: I1014 13:35:33.089133 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcm5c\" (UniqueName: \"kubernetes.io/projected/cc7697c8-a46f-40f0-ab6a-e02b46a7a832-kube-api-access-fcm5c\") pod \"cc7697c8-a46f-40f0-ab6a-e02b46a7a832\" (UID: \"cc7697c8-a46f-40f0-ab6a-e02b46a7a832\") "
Oct 14 13:35:33.096697 master-1 kubenswrapper[4740]: I1014 13:35:33.096568 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7697c8-a46f-40f0-ab6a-e02b46a7a832-kube-api-access-fcm5c" (OuterVolumeSpecName: "kube-api-access-fcm5c") pod "cc7697c8-a46f-40f0-ab6a-e02b46a7a832" (UID: "cc7697c8-a46f-40f0-ab6a-e02b46a7a832"). InnerVolumeSpecName "kube-api-access-fcm5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:35:33.102974 master-1 kubenswrapper[4740]: I1014 13:35:33.102903 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-26cmc-config-wxcn9"]
Oct 14 13:35:33.193504 master-1 kubenswrapper[4740]: I1014 13:35:33.192774 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcm5c\" (UniqueName: \"kubernetes.io/projected/cc7697c8-a46f-40f0-ab6a-e02b46a7a832-kube-api-access-fcm5c\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:34.955247 master-1 kubenswrapper[4740]: I1014 13:35:34.955154 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456de4fc-6251-4ab7-b211-c564642c6c82" path="/var/lib/kubelet/pods/456de4fc-6251-4ab7-b211-c564642c6c82/volumes"
Oct 14 13:35:36.267853 master-1 kubenswrapper[4740]: I1014 13:35:36.267806 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pxfvm"]
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: E1014 13:35:36.268213 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456de4fc-6251-4ab7-b211-c564642c6c82" containerName="ovn-config"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: I1014 13:35:36.268242 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="456de4fc-6251-4ab7-b211-c564642c6c82" containerName="ovn-config"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: E1014 13:35:36.268259 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05917721-13c9-4d5c-93a6-b00662018163" containerName="swift-ring-rebalance"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: I1014 13:35:36.268266 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="05917721-13c9-4d5c-93a6-b00662018163" containerName="swift-ring-rebalance"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: E1014 13:35:36.268314 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7697c8-a46f-40f0-ab6a-e02b46a7a832" containerName="mariadb-database-create"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: I1014 13:35:36.268321 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7697c8-a46f-40f0-ab6a-e02b46a7a832" containerName="mariadb-database-create"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: I1014 13:35:36.268472 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="05917721-13c9-4d5c-93a6-b00662018163" containerName="swift-ring-rebalance"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: I1014 13:35:36.268485 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="456de4fc-6251-4ab7-b211-c564642c6c82" containerName="ovn-config"
Oct 14 13:35:36.268742 master-1 kubenswrapper[4740]: I1014 13:35:36.268503 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7697c8-a46f-40f0-ab6a-e02b46a7a832" containerName="mariadb-database-create"
Oct 14 13:35:36.269148 master-1 kubenswrapper[4740]: I1014 13:35:36.269124 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.271627 master-1 kubenswrapper[4740]: I1014 13:35:36.271593 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 14 13:35:36.271762 master-1 kubenswrapper[4740]: I1014 13:35:36.271723 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 14 13:35:36.272515 master-1 kubenswrapper[4740]: I1014 13:35:36.272439 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 14 13:35:36.301418 master-1 kubenswrapper[4740]: I1014 13:35:36.301353 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pxfvm"]
Oct 14 13:35:36.455013 master-1 kubenswrapper[4740]: I1014 13:35:36.454939 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-combined-ca-bundle\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.455348 master-1 kubenswrapper[4740]: I1014 13:35:36.455313 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s4sl\" (UniqueName: \"kubernetes.io/projected/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-kube-api-access-6s4sl\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.455490 master-1 kubenswrapper[4740]: I1014 13:35:36.455457 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-config-data\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.556901 master-1 kubenswrapper[4740]: I1014 13:35:36.556829 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-config-data\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.556901 master-1 kubenswrapper[4740]: I1014 13:35:36.556915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-combined-ca-bundle\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.557202 master-1 kubenswrapper[4740]: I1014 13:35:36.556982 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s4sl\" (UniqueName: \"kubernetes.io/projected/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-kube-api-access-6s4sl\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.560207 master-1 kubenswrapper[4740]: I1014 13:35:36.560166 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-config-data\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.560439 master-1 kubenswrapper[4740]: I1014 13:35:36.560400 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-combined-ca-bundle\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.659951 master-1 kubenswrapper[4740]: I1014 13:35:36.659855 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s4sl\" (UniqueName: \"kubernetes.io/projected/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-kube-api-access-6s4sl\") pod \"keystone-db-sync-pxfvm\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") " pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:36.888647 master-1 kubenswrapper[4740]: I1014 13:35:36.886004 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:37.931675 master-1 kubenswrapper[4740]: I1014 13:35:37.931615 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pxfvm"]
Oct 14 13:35:39.072917 master-1 kubenswrapper[4740]: I1014 13:35:39.070807 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxfvm" event={"ID":"14ead27a-a1bb-4c69-8ecb-b982d0ca526b","Type":"ContainerStarted","Data":"ada1a1fa6cf306e0e3ee9fd719d689120f184c92df6b8d8ce99e61bd33440b35"}
Oct 14 13:35:43.117951 master-1 kubenswrapper[4740]: I1014 13:35:43.117801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxfvm" event={"ID":"14ead27a-a1bb-4c69-8ecb-b982d0ca526b","Type":"ContainerStarted","Data":"7baac481e755941c7afac5ebf22810288b8bee1a77644b515f33d648251687c1"}
Oct 14 13:35:45.403922 master-1 kubenswrapper[4740]: I1014 13:35:45.403800 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pxfvm" podStartSLOduration=4.787729833 podStartE2EDuration="9.403773889s" podCreationTimestamp="2025-10-14 13:35:36 +0000 UTC" firstStartedPulling="2025-10-14 13:35:38.194430381 +0000 UTC m=+1764.004719720" lastFinishedPulling="2025-10-14 13:35:42.810474447 +0000 UTC m=+1768.620763776" observedRunningTime="2025-10-14 13:35:43.17651657 +0000 UTC m=+1768.986805899" watchObservedRunningTime="2025-10-14 13:35:45.403773889 +0000 UTC m=+1771.214063228"
Oct 14 13:35:45.408394 master-1 kubenswrapper[4740]: I1014 13:35:45.407217 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-z669w"]
Oct 14 13:35:45.410631 master-1 kubenswrapper[4740]: I1014 13:35:45.410551 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.414397 master-1 kubenswrapper[4740]: I1014 13:35:45.414259 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-46645-config-data"
Oct 14 13:35:45.415087 master-1 kubenswrapper[4740]: I1014 13:35:45.415030 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z669w"]
Oct 14 13:35:45.584021 master-1 kubenswrapper[4740]: I1014 13:35:45.583890 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76p2t\" (UniqueName: \"kubernetes.io/projected/28738a5a-94be-43a4-a55e-720365a4246b-kube-api-access-76p2t\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.584413 master-1 kubenswrapper[4740]: I1014 13:35:45.584387 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-db-sync-config-data\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.584921 master-1 kubenswrapper[4740]: I1014 13:35:45.584599 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-combined-ca-bundle\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.585217 master-1 kubenswrapper[4740]: I1014 13:35:45.585199 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-config-data\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.687181 master-1 kubenswrapper[4740]: I1014 13:35:45.686996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-db-sync-config-data\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.687181 master-1 kubenswrapper[4740]: I1014 13:35:45.687069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-combined-ca-bundle\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.687181 master-1 kubenswrapper[4740]: I1014 13:35:45.687111 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-config-data\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.687560 master-1 kubenswrapper[4740]: I1014 13:35:45.687249 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76p2t\" (UniqueName: \"kubernetes.io/projected/28738a5a-94be-43a4-a55e-720365a4246b-kube-api-access-76p2t\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.691065 master-1 kubenswrapper[4740]: I1014 13:35:45.691021 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-config-data\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.692915 master-1 kubenswrapper[4740]: I1014 13:35:45.692818 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-combined-ca-bundle\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.695798 master-1 kubenswrapper[4740]: I1014 13:35:45.695604 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-db-sync-config-data\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.718868 master-1 kubenswrapper[4740]: I1014 13:35:45.718813 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76p2t\" (UniqueName: \"kubernetes.io/projected/28738a5a-94be-43a4-a55e-720365a4246b-kube-api-access-76p2t\") pod \"glance-db-sync-z669w\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") " pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:45.758139 master-1 kubenswrapper[4740]: I1014 13:35:45.758080 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z669w"
Oct 14 13:35:46.350083 master-1 kubenswrapper[4740]: I1014 13:35:46.350015 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z669w"]
Oct 14 13:35:46.356291 master-1 kubenswrapper[4740]: W1014 13:35:46.356197 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28738a5a_94be_43a4_a55e_720365a4246b.slice/crio-bb36725723f7926e6fc1a5b5457566ac9acd8f810e9b70628fccd577f06c8180 WatchSource:0}: Error finding container bb36725723f7926e6fc1a5b5457566ac9acd8f810e9b70628fccd577f06c8180: Status 404 returned error can't find the container with id bb36725723f7926e6fc1a5b5457566ac9acd8f810e9b70628fccd577f06c8180
Oct 14 13:35:47.175021 master-1 kubenswrapper[4740]: I1014 13:35:47.174954 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z669w" event={"ID":"28738a5a-94be-43a4-a55e-720365a4246b","Type":"ContainerStarted","Data":"bb36725723f7926e6fc1a5b5457566ac9acd8f810e9b70628fccd577f06c8180"}
Oct 14 13:35:48.191932 master-1 kubenswrapper[4740]: I1014 13:35:48.191763 4740 generic.go:334] "Generic (PLEG): container finished" podID="14ead27a-a1bb-4c69-8ecb-b982d0ca526b" containerID="7baac481e755941c7afac5ebf22810288b8bee1a77644b515f33d648251687c1" exitCode=0
Oct 14 13:35:48.191932 master-1 kubenswrapper[4740]: I1014 13:35:48.191846 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxfvm" event={"ID":"14ead27a-a1bb-4c69-8ecb-b982d0ca526b","Type":"ContainerDied","Data":"7baac481e755941c7afac5ebf22810288b8bee1a77644b515f33d648251687c1"}
Oct 14 13:35:49.931520 master-1 kubenswrapper[4740]: I1014 13:35:49.931407 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:50.086147 master-1 kubenswrapper[4740]: I1014 13:35:50.084088 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-config-data\") pod \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") "
Oct 14 13:35:50.086147 master-1 kubenswrapper[4740]: I1014 13:35:50.084310 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-combined-ca-bundle\") pod \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") "
Oct 14 13:35:50.086147 master-1 kubenswrapper[4740]: I1014 13:35:50.084427 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s4sl\" (UniqueName: \"kubernetes.io/projected/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-kube-api-access-6s4sl\") pod \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\" (UID: \"14ead27a-a1bb-4c69-8ecb-b982d0ca526b\") "
Oct 14 13:35:50.092992 master-1 kubenswrapper[4740]: I1014 13:35:50.092702 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-kube-api-access-6s4sl" (OuterVolumeSpecName: "kube-api-access-6s4sl") pod "14ead27a-a1bb-4c69-8ecb-b982d0ca526b" (UID: "14ead27a-a1bb-4c69-8ecb-b982d0ca526b"). InnerVolumeSpecName "kube-api-access-6s4sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:35:50.110495 master-1 kubenswrapper[4740]: I1014 13:35:50.110427 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14ead27a-a1bb-4c69-8ecb-b982d0ca526b" (UID: "14ead27a-a1bb-4c69-8ecb-b982d0ca526b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:35:50.138780 master-1 kubenswrapper[4740]: I1014 13:35:50.138704 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-config-data" (OuterVolumeSpecName: "config-data") pod "14ead27a-a1bb-4c69-8ecb-b982d0ca526b" (UID: "14ead27a-a1bb-4c69-8ecb-b982d0ca526b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:35:50.186818 master-1 kubenswrapper[4740]: I1014 13:35:50.186763 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:50.186818 master-1 kubenswrapper[4740]: I1014 13:35:50.186812 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:50.186992 master-1 kubenswrapper[4740]: I1014 13:35:50.186827 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s4sl\" (UniqueName: \"kubernetes.io/projected/14ead27a-a1bb-4c69-8ecb-b982d0ca526b-kube-api-access-6s4sl\") on node \"master-1\" DevicePath \"\""
Oct 14 13:35:50.210186 master-1 kubenswrapper[4740]: I1014 13:35:50.210131 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxfvm" event={"ID":"14ead27a-a1bb-4c69-8ecb-b982d0ca526b","Type":"ContainerDied","Data":"ada1a1fa6cf306e0e3ee9fd719d689120f184c92df6b8d8ce99e61bd33440b35"}
Oct 14 13:35:50.210425 master-1 kubenswrapper[4740]: I1014 13:35:50.210262 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada1a1fa6cf306e0e3ee9fd719d689120f184c92df6b8d8ce99e61bd33440b35"
Oct 14 13:35:50.210425 master-1 kubenswrapper[4740]: I1014 13:35:50.210176 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pxfvm"
Oct 14 13:35:50.310339 master-1 kubenswrapper[4740]: I1014 13:35:50.310244 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ff48c4bf5-pkm9g"]
Oct 14 13:35:50.310817 master-1 kubenswrapper[4740]: E1014 13:35:50.310587 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ead27a-a1bb-4c69-8ecb-b982d0ca526b" containerName="keystone-db-sync"
Oct 14 13:35:50.310817 master-1 kubenswrapper[4740]: I1014 13:35:50.310605 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ead27a-a1bb-4c69-8ecb-b982d0ca526b" containerName="keystone-db-sync"
Oct 14 13:35:50.310817 master-1 kubenswrapper[4740]: I1014 13:35:50.310782 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ead27a-a1bb-4c69-8ecb-b982d0ca526b" containerName="keystone-db-sync"
Oct 14 13:35:50.311874 master-1 kubenswrapper[4740]: I1014 13:35:50.311834 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.316441 master-1 kubenswrapper[4740]: I1014 13:35:50.316387 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 14 13:35:50.316566 master-1 kubenswrapper[4740]: I1014 13:35:50.316476 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 14 13:35:50.316566 master-1 kubenswrapper[4740]: I1014 13:35:50.316401 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Oct 14 13:35:50.316663 master-1 kubenswrapper[4740]: I1014 13:35:50.316546 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 14 13:35:50.316663 master-1 kubenswrapper[4740]: I1014 13:35:50.316603 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 14 13:35:50.330658 master-1 kubenswrapper[4740]: I1014 13:35:50.330593 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ff48c4bf5-pkm9g"]
Oct 14 13:35:50.497491 master-1 kubenswrapper[4740]: I1014 13:35:50.497384 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-swift-storage-0\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.497491 master-1 kubenswrapper[4740]: I1014 13:35:50.497469 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-nb\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.498100 master-1 kubenswrapper[4740]: I1014 13:35:50.497571 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/277a73df-57b8-4d49-81ac-86b1167d0132-kube-api-access-drmfv\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.498100 master-1 kubenswrapper[4740]: I1014 13:35:50.497726 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-sb\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.498100 master-1 kubenswrapper[4740]: I1014 13:35:50.497801 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-svc\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.498100 master-1 kubenswrapper[4740]: I1014 13:35:50.498011 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-config\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.601879 master-1 kubenswrapper[4740]: I1014 13:35:50.600651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-swift-storage-0\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.601879 master-1 kubenswrapper[4740]: I1014 13:35:50.600724 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-nb\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.601879 master-1 kubenswrapper[4740]: I1014 13:35:50.600855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/277a73df-57b8-4d49-81ac-86b1167d0132-kube-api-access-drmfv\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.601879 master-1 kubenswrapper[4740]: I1014 13:35:50.600926 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-sb\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.601879 master-1 kubenswrapper[4740]: I1014 13:35:50.601119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-svc\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g"
Oct 14 13:35:50.601879 master-1 kubenswrapper[4740]: I1014 13:35:50.601193 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-config\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") "
pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:50.601879 master-1 kubenswrapper[4740]: I1014 13:35:50.601796 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-nb\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:50.602817 master-1 kubenswrapper[4740]: I1014 13:35:50.602368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-svc\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:50.602817 master-1 kubenswrapper[4740]: I1014 13:35:50.602413 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-sb\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:50.602817 master-1 kubenswrapper[4740]: I1014 13:35:50.602425 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-swift-storage-0\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:50.602968 master-1 kubenswrapper[4740]: I1014 13:35:50.602859 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-config\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 
13:35:50.666679 master-1 kubenswrapper[4740]: I1014 13:35:50.666614 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/277a73df-57b8-4d49-81ac-86b1167d0132-kube-api-access-drmfv\") pod \"dnsmasq-dns-ff48c4bf5-pkm9g\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:50.779713 master-1 kubenswrapper[4740]: I1014 13:35:50.779666 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff48c4bf5-pkm9g"] Oct 14 13:35:50.780955 master-1 kubenswrapper[4740]: I1014 13:35:50.780920 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:51.015283 master-1 kubenswrapper[4740]: I1014 13:35:51.012506 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-b28pf"] Oct 14 13:35:51.018544 master-1 kubenswrapper[4740]: I1014 13:35:51.018504 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.022327 master-1 kubenswrapper[4740]: I1014 13:35:51.022289 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 14 13:35:51.022489 master-1 kubenswrapper[4740]: I1014 13:35:51.022468 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-b28pf"] Oct 14 13:35:51.111108 master-1 kubenswrapper[4740]: I1014 13:35:51.111035 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-combined-ca-bundle\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.111357 master-1 kubenswrapper[4740]: I1014 13:35:51.111164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-config-data\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.111408 master-1 kubenswrapper[4740]: I1014 13:35:51.111385 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjm2l\" (UniqueName: \"kubernetes.io/projected/e3f31b4a-3d7a-4274-befd-82f1bc035e07-kube-api-access-hjm2l\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.215534 master-1 kubenswrapper[4740]: I1014 13:35:51.213695 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjm2l\" (UniqueName: \"kubernetes.io/projected/e3f31b4a-3d7a-4274-befd-82f1bc035e07-kube-api-access-hjm2l\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " 
pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.215534 master-1 kubenswrapper[4740]: I1014 13:35:51.213780 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-combined-ca-bundle\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.215534 master-1 kubenswrapper[4740]: I1014 13:35:51.213808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-config-data\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.218287 master-1 kubenswrapper[4740]: I1014 13:35:51.218248 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-combined-ca-bundle\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.225029 master-1 kubenswrapper[4740]: I1014 13:35:51.224982 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-config-data\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.289487 master-1 kubenswrapper[4740]: I1014 13:35:51.287443 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjm2l\" (UniqueName: \"kubernetes.io/projected/e3f31b4a-3d7a-4274-befd-82f1bc035e07-kube-api-access-hjm2l\") pod \"heat-db-sync-b28pf\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.323769 master-1 kubenswrapper[4740]: I1014 13:35:51.318851 
4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bc7jg"] Oct 14 13:35:51.323998 master-1 kubenswrapper[4740]: I1014 13:35:51.323933 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.328517 master-1 kubenswrapper[4740]: I1014 13:35:51.327885 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bc7jg"] Oct 14 13:35:51.332261 master-1 kubenswrapper[4740]: I1014 13:35:51.330546 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 13:35:51.332261 master-1 kubenswrapper[4740]: I1014 13:35:51.331853 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 13:35:51.348680 master-1 kubenswrapper[4740]: I1014 13:35:51.348600 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-b28pf" Oct 14 13:35:51.425421 master-1 kubenswrapper[4740]: I1014 13:35:51.424810 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-combined-ca-bundle\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.425421 master-1 kubenswrapper[4740]: I1014 13:35:51.424883 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcsd8\" (UniqueName: \"kubernetes.io/projected/07974c63-665d-43bd-a568-286d26004725-kube-api-access-kcsd8\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.425421 master-1 kubenswrapper[4740]: I1014 13:35:51.424920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-config\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.447409 master-1 kubenswrapper[4740]: I1014 13:35:51.447335 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff48c4bf5-pkm9g"] Oct 14 13:35:51.479741 master-1 kubenswrapper[4740]: I1014 13:35:51.479684 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-sx22g"] Oct 14 13:35:51.482008 master-1 kubenswrapper[4740]: I1014 13:35:51.480878 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.484595 master-1 kubenswrapper[4740]: I1014 13:35:51.484375 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 13:35:51.484595 master-1 kubenswrapper[4740]: I1014 13:35:51.484461 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 13:35:51.505114 master-1 kubenswrapper[4740]: I1014 13:35:51.504260 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-sx22g"] Oct 14 13:35:51.537303 master-1 kubenswrapper[4740]: I1014 13:35:51.537244 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-combined-ca-bundle\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.537532 master-1 kubenswrapper[4740]: I1014 13:35:51.537342 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcsd8\" (UniqueName: \"kubernetes.io/projected/07974c63-665d-43bd-a568-286d26004725-kube-api-access-kcsd8\") pod \"neutron-db-sync-bc7jg\" (UID: 
\"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.537532 master-1 kubenswrapper[4740]: I1014 13:35:51.537395 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-config\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.545190 master-1 kubenswrapper[4740]: I1014 13:35:51.545067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-config\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.549362 master-1 kubenswrapper[4740]: I1014 13:35:51.549298 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-combined-ca-bundle\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.642847 master-1 kubenswrapper[4740]: I1014 13:35:51.642787 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-config-data\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.642847 master-1 kubenswrapper[4740]: I1014 13:35:51.642832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5zq\" (UniqueName: \"kubernetes.io/projected/de58ce43-1433-46b0-9f48-d8add8324fe5-kube-api-access-6s5zq\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " 
pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.646934 master-1 kubenswrapper[4740]: I1014 13:35:51.646735 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-scripts\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.646934 master-1 kubenswrapper[4740]: I1014 13:35:51.646811 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-combined-ca-bundle\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.647981 master-1 kubenswrapper[4740]: I1014 13:35:51.647927 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de58ce43-1433-46b0-9f48-d8add8324fe5-logs\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.726579 master-1 kubenswrapper[4740]: I1014 13:35:51.726532 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcsd8\" (UniqueName: \"kubernetes.io/projected/07974c63-665d-43bd-a568-286d26004725-kube-api-access-kcsd8\") pod \"neutron-db-sync-bc7jg\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") " pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.749969 master-1 kubenswrapper[4740]: I1014 13:35:51.749901 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de58ce43-1433-46b0-9f48-d8add8324fe5-logs\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " 
pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.749969 master-1 kubenswrapper[4740]: I1014 13:35:51.749969 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-config-data\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.750187 master-1 kubenswrapper[4740]: I1014 13:35:51.749987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s5zq\" (UniqueName: \"kubernetes.io/projected/de58ce43-1433-46b0-9f48-d8add8324fe5-kube-api-access-6s5zq\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.750187 master-1 kubenswrapper[4740]: I1014 13:35:51.750013 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-scripts\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.750187 master-1 kubenswrapper[4740]: I1014 13:35:51.750029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-combined-ca-bundle\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.752415 master-1 kubenswrapper[4740]: I1014 13:35:51.752363 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de58ce43-1433-46b0-9f48-d8add8324fe5-logs\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.755657 master-1 
kubenswrapper[4740]: I1014 13:35:51.755610 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-config-data\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.755742 master-1 kubenswrapper[4740]: I1014 13:35:51.755644 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-combined-ca-bundle\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.762513 master-1 kubenswrapper[4740]: I1014 13:35:51.762448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-scripts\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.800028 master-1 kubenswrapper[4740]: I1014 13:35:51.799882 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-787cbbf4dc-666ws"] Oct 14 13:35:51.801476 master-1 kubenswrapper[4740]: I1014 13:35:51.801443 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:51.812294 master-1 kubenswrapper[4740]: I1014 13:35:51.811635 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s5zq\" (UniqueName: \"kubernetes.io/projected/de58ce43-1433-46b0-9f48-d8add8324fe5-kube-api-access-6s5zq\") pod \"placement-db-sync-sx22g\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.823653 master-1 kubenswrapper[4740]: I1014 13:35:51.823590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-787cbbf4dc-666ws"] Oct 14 13:35:51.867317 master-1 kubenswrapper[4740]: I1014 13:35:51.867270 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sx22g" Oct 14 13:35:51.944182 master-1 kubenswrapper[4740]: I1014 13:35:51.944094 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bc7jg" Oct 14 13:35:51.951190 master-1 kubenswrapper[4740]: I1014 13:35:51.951122 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-b28pf"] Oct 14 13:35:51.955247 master-1 kubenswrapper[4740]: I1014 13:35:51.955149 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-swift-storage-0\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:51.955389 master-1 kubenswrapper[4740]: I1014 13:35:51.955295 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssxjd\" (UniqueName: \"kubernetes.io/projected/4864df54-8895-424b-85df-f8ce3bc5001e-kube-api-access-ssxjd\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: 
\"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:51.955602 master-1 kubenswrapper[4740]: I1014 13:35:51.955487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-config\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:51.955602 master-1 kubenswrapper[4740]: I1014 13:35:51.955581 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-svc\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:51.955868 master-1 kubenswrapper[4740]: I1014 13:35:51.955832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-nb\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:51.956372 master-1 kubenswrapper[4740]: I1014 13:35:51.956325 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-sb\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:51.967059 master-1 kubenswrapper[4740]: W1014 13:35:51.966933 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3f31b4a_3d7a_4274_befd_82f1bc035e07.slice/crio-b83e1b2cd71b2c2e01416cae88c4656a765374de56166c82dfd4c610b06c8973 WatchSource:0}: Error finding container b83e1b2cd71b2c2e01416cae88c4656a765374de56166c82dfd4c610b06c8973: Status 404 returned error can't find the container with id b83e1b2cd71b2c2e01416cae88c4656a765374de56166c82dfd4c610b06c8973 Oct 14 13:35:52.056292 master-1 kubenswrapper[4740]: I1014 13:35:52.056155 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hd9hz"] Oct 14 13:35:52.062285 master-1 kubenswrapper[4740]: I1014 13:35:52.060731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-sb\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.062285 master-1 kubenswrapper[4740]: I1014 13:35:52.060811 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-swift-storage-0\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.062285 master-1 kubenswrapper[4740]: I1014 13:35:52.060884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssxjd\" (UniqueName: \"kubernetes.io/projected/4864df54-8895-424b-85df-f8ce3bc5001e-kube-api-access-ssxjd\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.062285 master-1 kubenswrapper[4740]: I1014 13:35:52.060913 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-config\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.062285 master-1 kubenswrapper[4740]: I1014 13:35:52.060954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-svc\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.062285 master-1 kubenswrapper[4740]: I1014 13:35:52.061033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-nb\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.062955 master-1 kubenswrapper[4740]: I1014 13:35:52.062676 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-nb\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.063945 master-1 kubenswrapper[4740]: I1014 13:35:52.063887 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-config\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.064220 master-1 kubenswrapper[4740]: I1014 13:35:52.064184 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-swift-storage-0\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.064411 master-1 kubenswrapper[4740]: I1014 13:35:52.064379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-svc\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.065853 master-1 kubenswrapper[4740]: I1014 13:35:52.065804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-sb\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.070377 master-1 kubenswrapper[4740]: I1014 13:35:52.070329 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.073770 master-1 kubenswrapper[4740]: I1014 13:35:52.073305 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 13:35:52.077071 master-1 kubenswrapper[4740]: I1014 13:35:52.076963 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hd9hz"] Oct 14 13:35:52.228205 master-1 kubenswrapper[4740]: I1014 13:35:52.228168 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssxjd\" (UniqueName: \"kubernetes.io/projected/4864df54-8895-424b-85df-f8ce3bc5001e-kube-api-access-ssxjd\") pod \"dnsmasq-dns-787cbbf4dc-666ws\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.265042 master-1 kubenswrapper[4740]: I1014 13:35:52.264924 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-db-sync-config-data\") pod \"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.265923 master-1 kubenswrapper[4740]: I1014 13:35:52.265087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-combined-ca-bundle\") pod \"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.265923 master-1 kubenswrapper[4740]: I1014 13:35:52.265129 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvgwc\" (UniqueName: \"kubernetes.io/projected/3314e007-8945-436e-b5bb-7a7d9bf583ba-kube-api-access-xvgwc\") pod \"barbican-db-sync-hd9hz\" (UID: 
\"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.267341 master-1 kubenswrapper[4740]: I1014 13:35:52.267287 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-b28pf" event={"ID":"e3f31b4a-3d7a-4274-befd-82f1bc035e07","Type":"ContainerStarted","Data":"b83e1b2cd71b2c2e01416cae88c4656a765374de56166c82dfd4c610b06c8973"} Oct 14 13:35:52.270047 master-1 kubenswrapper[4740]: I1014 13:35:52.270012 4740 generic.go:334] "Generic (PLEG): container finished" podID="277a73df-57b8-4d49-81ac-86b1167d0132" containerID="c050b70946c7079646e6d34f64cbc8e100f527414c18056c34be4627fc97aa3a" exitCode=0 Oct 14 13:35:52.270179 master-1 kubenswrapper[4740]: I1014 13:35:52.270071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" event={"ID":"277a73df-57b8-4d49-81ac-86b1167d0132","Type":"ContainerDied","Data":"c050b70946c7079646e6d34f64cbc8e100f527414c18056c34be4627fc97aa3a"} Oct 14 13:35:52.270179 master-1 kubenswrapper[4740]: I1014 13:35:52.270099 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" event={"ID":"277a73df-57b8-4d49-81ac-86b1167d0132","Type":"ContainerStarted","Data":"c90f91c99086cc5bcd740aee33d6ee1622dc77e212de0294300c707a0d669163"} Oct 14 13:35:52.367649 master-1 kubenswrapper[4740]: I1014 13:35:52.367498 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-combined-ca-bundle\") pod \"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.367649 master-1 kubenswrapper[4740]: I1014 13:35:52.367557 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvgwc\" (UniqueName: \"kubernetes.io/projected/3314e007-8945-436e-b5bb-7a7d9bf583ba-kube-api-access-xvgwc\") pod 
\"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.367649 master-1 kubenswrapper[4740]: I1014 13:35:52.367651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-db-sync-config-data\") pod \"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.371946 master-1 kubenswrapper[4740]: I1014 13:35:52.371514 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-db-sync-config-data\") pod \"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.371946 master-1 kubenswrapper[4740]: I1014 13:35:52.371578 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-combined-ca-bundle\") pod \"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.395531 master-1 kubenswrapper[4740]: I1014 13:35:52.392845 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvgwc\" (UniqueName: \"kubernetes.io/projected/3314e007-8945-436e-b5bb-7a7d9bf583ba-kube-api-access-xvgwc\") pod \"barbican-db-sync-hd9hz\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") " pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.412406 master-1 kubenswrapper[4740]: W1014 13:35:52.412352 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde58ce43_1433_46b0_9f48_d8add8324fe5.slice/crio-0e0eca49d664039761bc47e584901085cae299a55d51cbdcc2381b040eff8c84 WatchSource:0}: Error finding container 0e0eca49d664039761bc47e584901085cae299a55d51cbdcc2381b040eff8c84: Status 404 returned error can't find the container with id 0e0eca49d664039761bc47e584901085cae299a55d51cbdcc2381b040eff8c84 Oct 14 13:35:52.418826 master-1 kubenswrapper[4740]: I1014 13:35:52.418748 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-sx22g"] Oct 14 13:35:52.463900 master-1 kubenswrapper[4740]: I1014 13:35:52.463847 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:35:52.693317 master-1 kubenswrapper[4740]: I1014 13:35:52.692637 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hd9hz" Oct 14 13:35:52.844304 master-1 kubenswrapper[4740]: I1014 13:35:52.844255 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bc7jg"] Oct 14 13:35:52.863318 master-1 kubenswrapper[4740]: W1014 13:35:52.863142 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07974c63_665d_43bd_a568_286d26004725.slice/crio-3d78b76138f77b02d3947dba66d614198e974f4b99c6cd501c2b3bf998508e18 WatchSource:0}: Error finding container 3d78b76138f77b02d3947dba66d614198e974f4b99c6cd501c2b3bf998508e18: Status 404 returned error can't find the container with id 3d78b76138f77b02d3947dba66d614198e974f4b99c6cd501c2b3bf998508e18 Oct 14 13:35:53.127573 master-1 kubenswrapper[4740]: I1014 13:35:53.126554 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:53.216308 master-1 kubenswrapper[4740]: I1014 13:35:53.216130 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/277a73df-57b8-4d49-81ac-86b1167d0132-kube-api-access-drmfv\") pod \"277a73df-57b8-4d49-81ac-86b1167d0132\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " Oct 14 13:35:53.216308 master-1 kubenswrapper[4740]: I1014 13:35:53.216246 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-nb\") pod \"277a73df-57b8-4d49-81ac-86b1167d0132\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " Oct 14 13:35:53.220318 master-1 kubenswrapper[4740]: I1014 13:35:53.220250 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277a73df-57b8-4d49-81ac-86b1167d0132-kube-api-access-drmfv" (OuterVolumeSpecName: "kube-api-access-drmfv") pod "277a73df-57b8-4d49-81ac-86b1167d0132" (UID: "277a73df-57b8-4d49-81ac-86b1167d0132"). InnerVolumeSpecName "kube-api-access-drmfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:53.249420 master-1 kubenswrapper[4740]: I1014 13:35:53.249313 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "277a73df-57b8-4d49-81ac-86b1167d0132" (UID: "277a73df-57b8-4d49-81ac-86b1167d0132"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:53.285319 master-1 kubenswrapper[4740]: I1014 13:35:53.285257 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sx22g" event={"ID":"de58ce43-1433-46b0-9f48-d8add8324fe5","Type":"ContainerStarted","Data":"0e0eca49d664039761bc47e584901085cae299a55d51cbdcc2381b040eff8c84"} Oct 14 13:35:53.286983 master-1 kubenswrapper[4740]: I1014 13:35:53.286935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bc7jg" event={"ID":"07974c63-665d-43bd-a568-286d26004725","Type":"ContainerStarted","Data":"75e10b515b7197d9698e3991f1054c359ae157c60822b216a693d51035babca0"} Oct 14 13:35:53.286983 master-1 kubenswrapper[4740]: I1014 13:35:53.286958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bc7jg" event={"ID":"07974c63-665d-43bd-a568-286d26004725","Type":"ContainerStarted","Data":"3d78b76138f77b02d3947dba66d614198e974f4b99c6cd501c2b3bf998508e18"} Oct 14 13:35:53.294672 master-1 kubenswrapper[4740]: I1014 13:35:53.294612 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" event={"ID":"277a73df-57b8-4d49-81ac-86b1167d0132","Type":"ContainerDied","Data":"c90f91c99086cc5bcd740aee33d6ee1622dc77e212de0294300c707a0d669163"} Oct 14 13:35:53.294845 master-1 kubenswrapper[4740]: I1014 13:35:53.294668 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ff48c4bf5-pkm9g" Oct 14 13:35:53.295070 master-1 kubenswrapper[4740]: I1014 13:35:53.295005 4740 scope.go:117] "RemoveContainer" containerID="c050b70946c7079646e6d34f64cbc8e100f527414c18056c34be4627fc97aa3a" Oct 14 13:35:53.299987 master-1 kubenswrapper[4740]: I1014 13:35:53.299915 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-787cbbf4dc-666ws"] Oct 14 13:35:53.317634 master-1 kubenswrapper[4740]: I1014 13:35:53.317498 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-sb\") pod \"277a73df-57b8-4d49-81ac-86b1167d0132\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " Oct 14 13:35:53.317792 master-1 kubenswrapper[4740]: I1014 13:35:53.317754 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-svc\") pod \"277a73df-57b8-4d49-81ac-86b1167d0132\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " Oct 14 13:35:53.317841 master-1 kubenswrapper[4740]: I1014 13:35:53.317806 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-config\") pod \"277a73df-57b8-4d49-81ac-86b1167d0132\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " Oct 14 13:35:53.317889 master-1 kubenswrapper[4740]: I1014 13:35:53.317834 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-swift-storage-0\") pod \"277a73df-57b8-4d49-81ac-86b1167d0132\" (UID: \"277a73df-57b8-4d49-81ac-86b1167d0132\") " Oct 14 13:35:53.318742 master-1 kubenswrapper[4740]: I1014 13:35:53.318695 4740 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-drmfv\" (UniqueName: \"kubernetes.io/projected/277a73df-57b8-4d49-81ac-86b1167d0132-kube-api-access-drmfv\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:53.318884 master-1 kubenswrapper[4740]: I1014 13:35:53.318855 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:53.340003 master-1 kubenswrapper[4740]: I1014 13:35:53.339942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "277a73df-57b8-4d49-81ac-86b1167d0132" (UID: "277a73df-57b8-4d49-81ac-86b1167d0132"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:53.342863 master-1 kubenswrapper[4740]: I1014 13:35:53.342782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "277a73df-57b8-4d49-81ac-86b1167d0132" (UID: "277a73df-57b8-4d49-81ac-86b1167d0132"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:53.352602 master-1 kubenswrapper[4740]: I1014 13:35:53.352540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "277a73df-57b8-4d49-81ac-86b1167d0132" (UID: "277a73df-57b8-4d49-81ac-86b1167d0132"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:53.360735 master-1 kubenswrapper[4740]: I1014 13:35:53.360606 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-config" (OuterVolumeSpecName: "config") pod "277a73df-57b8-4d49-81ac-86b1167d0132" (UID: "277a73df-57b8-4d49-81ac-86b1167d0132"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:53.420864 master-1 kubenswrapper[4740]: I1014 13:35:53.420753 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:53.420864 master-1 kubenswrapper[4740]: I1014 13:35:53.420876 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:53.421217 master-1 kubenswrapper[4740]: I1014 13:35:53.420897 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:53.421217 master-1 kubenswrapper[4740]: I1014 13:35:53.420912 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/277a73df-57b8-4d49-81ac-86b1167d0132-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:35:53.450979 master-1 kubenswrapper[4740]: I1014 13:35:53.450903 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hd9hz"] Oct 14 13:35:53.666433 master-1 kubenswrapper[4740]: I1014 13:35:53.666344 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bc7jg" podStartSLOduration=2.666314849 
podStartE2EDuration="2.666314849s" podCreationTimestamp="2025-10-14 13:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:53.663666098 +0000 UTC m=+1779.473955427" watchObservedRunningTime="2025-10-14 13:35:53.666314849 +0000 UTC m=+1779.476604178" Oct 14 13:35:53.992356 master-1 kubenswrapper[4740]: I1014 13:35:53.991966 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff48c4bf5-pkm9g"] Oct 14 13:35:54.078599 master-1 kubenswrapper[4740]: I1014 13:35:54.078504 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ff48c4bf5-pkm9g"] Oct 14 13:35:54.304906 master-1 kubenswrapper[4740]: I1014 13:35:54.304848 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" event={"ID":"4864df54-8895-424b-85df-f8ce3bc5001e","Type":"ContainerStarted","Data":"01a026a755bde627c0aec104e4db08c55e60b381fbc33fea085e51e1d516fd45"} Oct 14 13:35:54.954139 master-1 kubenswrapper[4740]: I1014 13:35:54.954088 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277a73df-57b8-4d49-81ac-86b1167d0132" path="/var/lib/kubelet/pods/277a73df-57b8-4d49-81ac-86b1167d0132/volumes" Oct 14 13:35:55.358842 master-1 kubenswrapper[4740]: I1014 13:35:55.358777 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46645-db-sync-bn4lj"] Oct 14 13:35:55.359406 master-1 kubenswrapper[4740]: E1014 13:35:55.359159 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277a73df-57b8-4d49-81ac-86b1167d0132" containerName="init" Oct 14 13:35:55.359406 master-1 kubenswrapper[4740]: I1014 13:35:55.359173 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="277a73df-57b8-4d49-81ac-86b1167d0132" containerName="init" Oct 14 13:35:55.359406 master-1 kubenswrapper[4740]: I1014 13:35:55.359367 4740 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="277a73df-57b8-4d49-81ac-86b1167d0132" containerName="init" Oct 14 13:35:55.360094 master-1 kubenswrapper[4740]: I1014 13:35:55.360032 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.363589 master-1 kubenswrapper[4740]: I1014 13:35:55.363352 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-config-data" Oct 14 13:35:55.363589 master-1 kubenswrapper[4740]: I1014 13:35:55.363521 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-scripts" Oct 14 13:35:55.386447 master-1 kubenswrapper[4740]: I1014 13:35:55.386403 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-db-sync-bn4lj"] Oct 14 13:35:55.476094 master-1 kubenswrapper[4740]: I1014 13:35:55.476020 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-scripts\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.476403 master-1 kubenswrapper[4740]: I1014 13:35:55.476110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-db-sync-config-data\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.476403 master-1 kubenswrapper[4740]: I1014 13:35:55.476178 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gt2\" (UniqueName: \"kubernetes.io/projected/97045127-d8fb-49d6-8a81-816517ba472d-kube-api-access-r9gt2\") pod \"cinder-46645-db-sync-bn4lj\" (UID: 
\"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.476403 master-1 kubenswrapper[4740]: I1014 13:35:55.476212 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-combined-ca-bundle\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.476403 master-1 kubenswrapper[4740]: I1014 13:35:55.476255 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97045127-d8fb-49d6-8a81-816517ba472d-etc-machine-id\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.476403 master-1 kubenswrapper[4740]: I1014 13:35:55.476271 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-config-data\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.578314 master-1 kubenswrapper[4740]: I1014 13:35:55.578201 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-db-sync-config-data\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.579492 master-1 kubenswrapper[4740]: I1014 13:35:55.578702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9gt2\" (UniqueName: 
\"kubernetes.io/projected/97045127-d8fb-49d6-8a81-816517ba472d-kube-api-access-r9gt2\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.579492 master-1 kubenswrapper[4740]: I1014 13:35:55.578754 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-combined-ca-bundle\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.579492 master-1 kubenswrapper[4740]: I1014 13:35:55.578773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97045127-d8fb-49d6-8a81-816517ba472d-etc-machine-id\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.579492 master-1 kubenswrapper[4740]: I1014 13:35:55.578793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-config-data\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.579492 master-1 kubenswrapper[4740]: I1014 13:35:55.578820 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-scripts\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.579492 master-1 kubenswrapper[4740]: I1014 13:35:55.579259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/97045127-d8fb-49d6-8a81-816517ba472d-etc-machine-id\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.584029 master-1 kubenswrapper[4740]: I1014 13:35:55.583849 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-scripts\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.585404 master-1 kubenswrapper[4740]: I1014 13:35:55.585356 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-config-data\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.585474 master-1 kubenswrapper[4740]: I1014 13:35:55.585449 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-db-sync-config-data\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.589616 master-1 kubenswrapper[4740]: I1014 13:35:55.589529 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-combined-ca-bundle\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.610414 master-1 kubenswrapper[4740]: I1014 13:35:55.610301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9gt2\" (UniqueName: 
\"kubernetes.io/projected/97045127-d8fb-49d6-8a81-816517ba472d-kube-api-access-r9gt2\") pod \"cinder-46645-db-sync-bn4lj\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:35:55.674103 master-1 kubenswrapper[4740]: I1014 13:35:55.674031 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:36:01.011154 master-1 kubenswrapper[4740]: W1014 13:36:01.011087 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3314e007_8945_436e_b5bb_7a7d9bf583ba.slice/crio-2a22f02c55e823f6fb9ccf03f0af27f1369cc17d4a93e7f315883fb235c19ed2 WatchSource:0}: Error finding container 2a22f02c55e823f6fb9ccf03f0af27f1369cc17d4a93e7f315883fb235c19ed2: Status 404 returned error can't find the container with id 2a22f02c55e823f6fb9ccf03f0af27f1369cc17d4a93e7f315883fb235c19ed2 Oct 14 13:36:01.370353 master-1 kubenswrapper[4740]: I1014 13:36:01.370187 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerStarted","Data":"2a22f02c55e823f6fb9ccf03f0af27f1369cc17d4a93e7f315883fb235c19ed2"} Oct 14 13:36:02.977688 master-1 kubenswrapper[4740]: W1014 13:36:02.977635 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97045127_d8fb_49d6_8a81_816517ba472d.slice/crio-f3326807baeefd0e4f8c22e6a3aa65d5479d449bcba1f1bab0662636b7f72794 WatchSource:0}: Error finding container f3326807baeefd0e4f8c22e6a3aa65d5479d449bcba1f1bab0662636b7f72794: Status 404 returned error can't find the container with id f3326807baeefd0e4f8c22e6a3aa65d5479d449bcba1f1bab0662636b7f72794 Oct 14 13:36:02.984920 master-1 kubenswrapper[4740]: I1014 13:36:02.984537 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-46645-db-sync-bn4lj"] Oct 14 13:36:03.398340 master-1 kubenswrapper[4740]: I1014 13:36:03.398266 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z669w" event={"ID":"28738a5a-94be-43a4-a55e-720365a4246b","Type":"ContainerStarted","Data":"17d5fd8df9c1cb34d0157c57c77ceaf1da15942e4119806c05cc8987c0cbf8a8"} Oct 14 13:36:03.407522 master-1 kubenswrapper[4740]: I1014 13:36:03.407455 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sx22g" event={"ID":"de58ce43-1433-46b0-9f48-d8add8324fe5","Type":"ContainerStarted","Data":"63ea5e6a1add31aaff94a0cc365478d8470e2d693de0a8f0ad07a0baf4d57f47"} Oct 14 13:36:03.415157 master-1 kubenswrapper[4740]: I1014 13:36:03.415113 4740 generic.go:334] "Generic (PLEG): container finished" podID="4864df54-8895-424b-85df-f8ce3bc5001e" containerID="da0cedfed0fb231148ec99161432858bea96a82e542c40af3d678861d039cb0c" exitCode=0 Oct 14 13:36:03.415639 master-1 kubenswrapper[4740]: I1014 13:36:03.415208 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" event={"ID":"4864df54-8895-424b-85df-f8ce3bc5001e","Type":"ContainerDied","Data":"da0cedfed0fb231148ec99161432858bea96a82e542c40af3d678861d039cb0c"} Oct 14 13:36:03.417694 master-1 kubenswrapper[4740]: I1014 13:36:03.417647 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-db-sync-bn4lj" event={"ID":"97045127-d8fb-49d6-8a81-816517ba472d","Type":"ContainerStarted","Data":"f3326807baeefd0e4f8c22e6a3aa65d5479d449bcba1f1bab0662636b7f72794"} Oct 14 13:36:03.539258 master-1 kubenswrapper[4740]: I1014 13:36:03.538679 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-z669w" podStartSLOduration=2.351601113 podStartE2EDuration="18.5386526s" podCreationTimestamp="2025-10-14 13:35:45 +0000 UTC" firstStartedPulling="2025-10-14 13:35:46.359050613 +0000 UTC m=+1772.169339972" 
lastFinishedPulling="2025-10-14 13:36:02.54610213 +0000 UTC m=+1788.356391459" observedRunningTime="2025-10-14 13:36:03.501311063 +0000 UTC m=+1789.311600392" watchObservedRunningTime="2025-10-14 13:36:03.5386526 +0000 UTC m=+1789.348941929" Oct 14 13:36:03.920302 master-1 kubenswrapper[4740]: I1014 13:36:03.920078 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-sx22g" podStartSLOduration=2.772780462 podStartE2EDuration="12.920032729s" podCreationTimestamp="2025-10-14 13:35:51 +0000 UTC" firstStartedPulling="2025-10-14 13:35:52.414633501 +0000 UTC m=+1778.224922830" lastFinishedPulling="2025-10-14 13:36:02.561885768 +0000 UTC m=+1788.372175097" observedRunningTime="2025-10-14 13:36:03.907910538 +0000 UTC m=+1789.718199867" watchObservedRunningTime="2025-10-14 13:36:03.920032729 +0000 UTC m=+1789.730322058" Oct 14 13:36:04.438162 master-1 kubenswrapper[4740]: I1014 13:36:04.438085 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" event={"ID":"4864df54-8895-424b-85df-f8ce3bc5001e","Type":"ContainerStarted","Data":"03d227c99e07b3086981b44d02b6e02ff2e9d58461f6d5ba85fc4e712af90b49"} Oct 14 13:36:04.475526 master-1 kubenswrapper[4740]: I1014 13:36:04.475459 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" podStartSLOduration=13.475442946 podStartE2EDuration="13.475442946s" podCreationTimestamp="2025-10-14 13:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:04.46913657 +0000 UTC m=+1790.279425919" watchObservedRunningTime="2025-10-14 13:36:04.475442946 +0000 UTC m=+1790.285732275" Oct 14 13:36:05.444925 master-1 kubenswrapper[4740]: I1014 13:36:05.444863 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:36:06.458069 
master-1 kubenswrapper[4740]: I1014 13:36:06.457997 4740 generic.go:334] "Generic (PLEG): container finished" podID="de58ce43-1433-46b0-9f48-d8add8324fe5" containerID="63ea5e6a1add31aaff94a0cc365478d8470e2d693de0a8f0ad07a0baf4d57f47" exitCode=0 Oct 14 13:36:06.459306 master-1 kubenswrapper[4740]: I1014 13:36:06.459218 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sx22g" event={"ID":"de58ce43-1433-46b0-9f48-d8add8324fe5","Type":"ContainerDied","Data":"63ea5e6a1add31aaff94a0cc365478d8470e2d693de0a8f0ad07a0baf4d57f47"} Oct 14 13:36:12.468444 master-1 kubenswrapper[4740]: I1014 13:36:12.466493 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:36:15.495817 master-1 kubenswrapper[4740]: I1014 13:36:15.495697 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5755976884-m54wt"] Oct 14 13:36:15.497376 master-1 kubenswrapper[4740]: I1014 13:36:15.497330 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.500843 master-1 kubenswrapper[4740]: I1014 13:36:15.500692 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 14 13:36:15.500941 master-1 kubenswrapper[4740]: I1014 13:36:15.500669 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 13:36:15.501538 master-1 kubenswrapper[4740]: I1014 13:36:15.501517 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 14 13:36:15.501652 master-1 kubenswrapper[4740]: I1014 13:36:15.501621 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 13:36:15.501792 master-1 kubenswrapper[4740]: I1014 13:36:15.501763 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 13:36:15.525599 master-1 kubenswrapper[4740]: I1014 13:36:15.525495 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5755976884-m54wt"] Oct 14 13:36:15.585638 master-1 kubenswrapper[4740]: I1014 13:36:15.585588 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-scripts\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.586035 master-1 kubenswrapper[4740]: I1014 13:36:15.585976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-combined-ca-bundle\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.586111 master-1 kubenswrapper[4740]: I1014 
13:36:15.586099 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-internal-tls-certs\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.586428 master-1 kubenswrapper[4740]: I1014 13:36:15.586394 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-fernet-keys\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.586507 master-1 kubenswrapper[4740]: I1014 13:36:15.586452 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-public-tls-certs\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.586584 master-1 kubenswrapper[4740]: I1014 13:36:15.586556 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvzg\" (UniqueName: \"kubernetes.io/projected/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-kube-api-access-wgvzg\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.586863 master-1 kubenswrapper[4740]: I1014 13:36:15.586812 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-credential-keys\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " 
pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.586981 master-1 kubenswrapper[4740]: I1014 13:36:15.586957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-config-data\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689187 master-1 kubenswrapper[4740]: I1014 13:36:15.689051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-combined-ca-bundle\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689187 master-1 kubenswrapper[4740]: I1014 13:36:15.689117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-internal-tls-certs\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689187 master-1 kubenswrapper[4740]: I1014 13:36:15.689163 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-fernet-keys\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689187 master-1 kubenswrapper[4740]: I1014 13:36:15.689181 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-public-tls-certs\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " 
pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689715 master-1 kubenswrapper[4740]: I1014 13:36:15.689251 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvzg\" (UniqueName: \"kubernetes.io/projected/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-kube-api-access-wgvzg\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689715 master-1 kubenswrapper[4740]: I1014 13:36:15.689291 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-credential-keys\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689715 master-1 kubenswrapper[4740]: I1014 13:36:15.689332 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-config-data\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.689715 master-1 kubenswrapper[4740]: I1014 13:36:15.689370 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-scripts\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.693579 master-1 kubenswrapper[4740]: I1014 13:36:15.693533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-combined-ca-bundle\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " 
pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.693755 master-1 kubenswrapper[4740]: I1014 13:36:15.693714 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-internal-tls-certs\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.694344 master-1 kubenswrapper[4740]: I1014 13:36:15.694306 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-scripts\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.694528 master-1 kubenswrapper[4740]: I1014 13:36:15.694498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-credential-keys\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.694995 master-1 kubenswrapper[4740]: I1014 13:36:15.694953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-fernet-keys\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.695759 master-1 kubenswrapper[4740]: I1014 13:36:15.695719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-config-data\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.698918 master-1 kubenswrapper[4740]: 
I1014 13:36:15.698315 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-public-tls-certs\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:15.884390 master-1 kubenswrapper[4740]: I1014 13:36:15.884271 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvzg\" (UniqueName: \"kubernetes.io/projected/d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21-kube-api-access-wgvzg\") pod \"keystone-5755976884-m54wt\" (UID: \"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21\") " pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:16.121989 master-1 kubenswrapper[4740]: I1014 13:36:16.121920 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:19.004728 master-1 kubenswrapper[4740]: I1014 13:36:19.004651 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-sx22g" Oct 14 13:36:19.078576 master-1 kubenswrapper[4740]: I1014 13:36:19.078311 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-config-data\") pod \"de58ce43-1433-46b0-9f48-d8add8324fe5\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " Oct 14 13:36:19.078576 master-1 kubenswrapper[4740]: I1014 13:36:19.078512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-combined-ca-bundle\") pod \"de58ce43-1433-46b0-9f48-d8add8324fe5\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " Oct 14 13:36:19.079011 master-1 kubenswrapper[4740]: I1014 13:36:19.078583 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-scripts\") pod \"de58ce43-1433-46b0-9f48-d8add8324fe5\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " Oct 14 13:36:19.079011 master-1 kubenswrapper[4740]: I1014 13:36:19.078717 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s5zq\" (UniqueName: \"kubernetes.io/projected/de58ce43-1433-46b0-9f48-d8add8324fe5-kube-api-access-6s5zq\") pod \"de58ce43-1433-46b0-9f48-d8add8324fe5\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " Oct 14 13:36:19.079011 master-1 kubenswrapper[4740]: I1014 13:36:19.078764 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de58ce43-1433-46b0-9f48-d8add8324fe5-logs\") pod \"de58ce43-1433-46b0-9f48-d8add8324fe5\" (UID: \"de58ce43-1433-46b0-9f48-d8add8324fe5\") " Oct 14 13:36:19.080056 master-1 kubenswrapper[4740]: I1014 13:36:19.079980 4740 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/de58ce43-1433-46b0-9f48-d8add8324fe5-logs" (OuterVolumeSpecName: "logs") pod "de58ce43-1433-46b0-9f48-d8add8324fe5" (UID: "de58ce43-1433-46b0-9f48-d8add8324fe5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:36:19.099339 master-1 kubenswrapper[4740]: I1014 13:36:19.099263 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-scripts" (OuterVolumeSpecName: "scripts") pod "de58ce43-1433-46b0-9f48-d8add8324fe5" (UID: "de58ce43-1433-46b0-9f48-d8add8324fe5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:19.100936 master-1 kubenswrapper[4740]: I1014 13:36:19.100843 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de58ce43-1433-46b0-9f48-d8add8324fe5-kube-api-access-6s5zq" (OuterVolumeSpecName: "kube-api-access-6s5zq") pod "de58ce43-1433-46b0-9f48-d8add8324fe5" (UID: "de58ce43-1433-46b0-9f48-d8add8324fe5"). InnerVolumeSpecName "kube-api-access-6s5zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:19.121924 master-1 kubenswrapper[4740]: I1014 13:36:19.121776 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-config-data" (OuterVolumeSpecName: "config-data") pod "de58ce43-1433-46b0-9f48-d8add8324fe5" (UID: "de58ce43-1433-46b0-9f48-d8add8324fe5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:19.133535 master-1 kubenswrapper[4740]: I1014 13:36:19.133489 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de58ce43-1433-46b0-9f48-d8add8324fe5" (UID: "de58ce43-1433-46b0-9f48-d8add8324fe5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:19.181620 master-1 kubenswrapper[4740]: I1014 13:36:19.181538 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:19.181620 master-1 kubenswrapper[4740]: I1014 13:36:19.181597 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:19.181620 master-1 kubenswrapper[4740]: I1014 13:36:19.181614 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de58ce43-1433-46b0-9f48-d8add8324fe5-scripts\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:19.181620 master-1 kubenswrapper[4740]: I1014 13:36:19.181628 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s5zq\" (UniqueName: \"kubernetes.io/projected/de58ce43-1433-46b0-9f48-d8add8324fe5-kube-api-access-6s5zq\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:19.181940 master-1 kubenswrapper[4740]: I1014 13:36:19.181644 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de58ce43-1433-46b0-9f48-d8add8324fe5-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:19.618989 master-1 kubenswrapper[4740]: I1014 13:36:19.618915 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-sx22g" event={"ID":"de58ce43-1433-46b0-9f48-d8add8324fe5","Type":"ContainerDied","Data":"0e0eca49d664039761bc47e584901085cae299a55d51cbdcc2381b040eff8c84"} Oct 14 13:36:19.619196 master-1 kubenswrapper[4740]: I1014 13:36:19.618996 4740 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0e0eca49d664039761bc47e584901085cae299a55d51cbdcc2381b040eff8c84" Oct 14 13:36:19.619196 master-1 kubenswrapper[4740]: I1014 13:36:19.619007 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-sx22g" Oct 14 13:36:19.623861 master-1 kubenswrapper[4740]: I1014 13:36:19.623808 4740 generic.go:334] "Generic (PLEG): container finished" podID="07974c63-665d-43bd-a568-286d26004725" containerID="75e10b515b7197d9698e3991f1054c359ae157c60822b216a693d51035babca0" exitCode=0 Oct 14 13:36:19.623989 master-1 kubenswrapper[4740]: I1014 13:36:19.623880 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bc7jg" event={"ID":"07974c63-665d-43bd-a568-286d26004725","Type":"ContainerDied","Data":"75e10b515b7197d9698e3991f1054c359ae157c60822b216a693d51035babca0"} Oct 14 13:36:19.626506 master-1 kubenswrapper[4740]: I1014 13:36:19.626165 4740 generic.go:334] "Generic (PLEG): container finished" podID="28738a5a-94be-43a4-a55e-720365a4246b" containerID="17d5fd8df9c1cb34d0157c57c77ceaf1da15942e4119806c05cc8987c0cbf8a8" exitCode=0 Oct 14 13:36:19.626506 master-1 kubenswrapper[4740]: I1014 13:36:19.626213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z669w" event={"ID":"28738a5a-94be-43a4-a55e-720365a4246b","Type":"ContainerDied","Data":"17d5fd8df9c1cb34d0157c57c77ceaf1da15942e4119806c05cc8987c0cbf8a8"} Oct 14 13:36:20.318343 master-1 kubenswrapper[4740]: I1014 13:36:20.318202 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-958c54db4-x58ll"] Oct 14 13:36:20.319142 master-1 kubenswrapper[4740]: E1014 13:36:20.318807 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de58ce43-1433-46b0-9f48-d8add8324fe5" containerName="placement-db-sync" Oct 14 13:36:20.319142 master-1 kubenswrapper[4740]: I1014 13:36:20.318831 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="de58ce43-1433-46b0-9f48-d8add8324fe5" containerName="placement-db-sync" Oct 14 13:36:20.322033 master-1 kubenswrapper[4740]: I1014 13:36:20.321976 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="de58ce43-1433-46b0-9f48-d8add8324fe5" containerName="placement-db-sync" Oct 14 13:36:20.323768 master-1 kubenswrapper[4740]: I1014 13:36:20.323730 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.328316 master-1 kubenswrapper[4740]: I1014 13:36:20.328257 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 13:36:20.328556 master-1 kubenswrapper[4740]: I1014 13:36:20.328457 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 14 13:36:20.328689 master-1 kubenswrapper[4740]: I1014 13:36:20.328521 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 13:36:20.330115 master-1 kubenswrapper[4740]: I1014 13:36:20.328896 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 14 13:36:20.340438 master-1 kubenswrapper[4740]: I1014 13:36:20.340355 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-958c54db4-x58ll"] Oct 14 13:36:20.416427 master-1 kubenswrapper[4740]: I1014 13:36:20.416338 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-internal-tls-certs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.416427 master-1 kubenswrapper[4740]: I1014 13:36:20.416420 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c5mr8\" (UniqueName: \"kubernetes.io/projected/650cb763-7f1d-47ca-89c0-6a8af75df8bf-kube-api-access-c5mr8\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.416718 master-1 kubenswrapper[4740]: I1014 13:36:20.416514 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-config-data\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.416718 master-1 kubenswrapper[4740]: I1014 13:36:20.416559 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-public-tls-certs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.416718 master-1 kubenswrapper[4740]: I1014 13:36:20.416577 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650cb763-7f1d-47ca-89c0-6a8af75df8bf-logs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.416718 master-1 kubenswrapper[4740]: I1014 13:36:20.416614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-scripts\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.416718 master-1 kubenswrapper[4740]: I1014 13:36:20.416632 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-combined-ca-bundle\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.518732 master-1 kubenswrapper[4740]: I1014 13:36:20.518638 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-public-tls-certs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.518732 master-1 kubenswrapper[4740]: I1014 13:36:20.518725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650cb763-7f1d-47ca-89c0-6a8af75df8bf-logs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.519075 master-1 kubenswrapper[4740]: I1014 13:36:20.518787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-scripts\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.519075 master-1 kubenswrapper[4740]: I1014 13:36:20.518805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-combined-ca-bundle\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.519075 master-1 kubenswrapper[4740]: I1014 13:36:20.518900 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-internal-tls-certs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.519075 master-1 kubenswrapper[4740]: I1014 13:36:20.518922 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mr8\" (UniqueName: \"kubernetes.io/projected/650cb763-7f1d-47ca-89c0-6a8af75df8bf-kube-api-access-c5mr8\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.519075 master-1 kubenswrapper[4740]: I1014 13:36:20.519001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-config-data\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.520220 master-1 kubenswrapper[4740]: I1014 13:36:20.519853 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/650cb763-7f1d-47ca-89c0-6a8af75df8bf-logs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.524194 master-1 kubenswrapper[4740]: I1014 13:36:20.523887 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-scripts\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.524194 master-1 kubenswrapper[4740]: I1014 13:36:20.523965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-config-data\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.524194 master-1 kubenswrapper[4740]: I1014 13:36:20.524101 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-combined-ca-bundle\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.525041 master-1 kubenswrapper[4740]: I1014 13:36:20.524994 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-public-tls-certs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.525841 master-1 kubenswrapper[4740]: I1014 13:36:20.525701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/650cb763-7f1d-47ca-89c0-6a8af75df8bf-internal-tls-certs\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.543671 master-1 kubenswrapper[4740]: I1014 13:36:20.543564 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mr8\" (UniqueName: \"kubernetes.io/projected/650cb763-7f1d-47ca-89c0-6a8af75df8bf-kube-api-access-c5mr8\") pod \"placement-958c54db4-x58ll\" (UID: \"650cb763-7f1d-47ca-89c0-6a8af75df8bf\") " pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:20.579329 master-1 kubenswrapper[4740]: I1014 13:36:20.579289 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:21.331670 master-1 kubenswrapper[4740]: W1014 13:36:21.331614 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd79aa1ab_3a01_482b_aaa2_b1b1a94e8c21.slice/crio-a7f9f6bb4e5ce1a50737e77e11056f8274917ff0e66d20e3728b3ebc7bb15800 WatchSource:0}: Error finding container a7f9f6bb4e5ce1a50737e77e11056f8274917ff0e66d20e3728b3ebc7bb15800: Status 404 returned error can't find the container with id a7f9f6bb4e5ce1a50737e77e11056f8274917ff0e66d20e3728b3ebc7bb15800 Oct 14 13:36:21.333814 master-1 kubenswrapper[4740]: I1014 13:36:21.333781 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5755976884-m54wt"] Oct 14 13:36:21.521043 master-1 kubenswrapper[4740]: I1014 13:36:21.520983 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-958c54db4-x58ll"] Oct 14 13:36:21.655596 master-1 kubenswrapper[4740]: I1014 13:36:21.655501 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-b28pf" event={"ID":"e3f31b4a-3d7a-4274-befd-82f1bc035e07","Type":"ContainerStarted","Data":"ddac44e8e70f5e96ad9e6a23164b8004361542efab2488d438c25a765cd435a2"} Oct 14 13:36:21.663012 master-1 kubenswrapper[4740]: I1014 13:36:21.661793 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5755976884-m54wt" event={"ID":"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21","Type":"ContainerStarted","Data":"ddfcf6b080defedff3648239905bb4fe22127f8aa9d8320a10b564c51e9032bd"} Oct 14 13:36:21.663012 master-1 kubenswrapper[4740]: I1014 13:36:21.661877 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5755976884-m54wt" event={"ID":"d79aa1ab-3a01-482b-aaa2-b1b1a94e8c21","Type":"ContainerStarted","Data":"a7f9f6bb4e5ce1a50737e77e11056f8274917ff0e66d20e3728b3ebc7bb15800"} Oct 14 13:36:21.663619 master-1 kubenswrapper[4740]: I1014 
13:36:21.663375 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5755976884-m54wt"
Oct 14 13:36:21.669327 master-1 kubenswrapper[4740]: I1014 13:36:21.669292 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bc7jg"
Oct 14 13:36:21.669849 master-1 kubenswrapper[4740]: I1014 13:36:21.669792 4740 generic.go:334] "Generic (PLEG): container finished" podID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerID="febbf3ec31b5190050844b99797abb1ce2f17e5837ec4b8b034a4dd05847c85c" exitCode=1
Oct 14 13:36:21.669957 master-1 kubenswrapper[4740]: I1014 13:36:21.669898 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"febbf3ec31b5190050844b99797abb1ce2f17e5837ec4b8b034a4dd05847c85c"}
Oct 14 13:36:21.670866 master-1 kubenswrapper[4740]: I1014 13:36:21.670702 4740 scope.go:117] "RemoveContainer" containerID="febbf3ec31b5190050844b99797abb1ce2f17e5837ec4b8b034a4dd05847c85c"
Oct 14 13:36:21.673946 master-1 kubenswrapper[4740]: I1014 13:36:21.673605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-db-sync-bn4lj" event={"ID":"97045127-d8fb-49d6-8a81-816517ba472d","Type":"ContainerStarted","Data":"64f6ca22fec4006b855c5f2f150e55db7483bc312f32e0ebc6f1f255917c6710"}
Oct 14 13:36:21.676124 master-1 kubenswrapper[4740]: I1014 13:36:21.676038 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-958c54db4-x58ll" event={"ID":"650cb763-7f1d-47ca-89c0-6a8af75df8bf","Type":"ContainerStarted","Data":"d9a48f35493f04abb3eec3e9fb64930b72d2731216985510f3b499a8bc45144a"}
Oct 14 13:36:21.751389 master-1 kubenswrapper[4740]: I1014 13:36:21.751330 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-combined-ca-bundle\") pod \"07974c63-665d-43bd-a568-286d26004725\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") "
Oct 14 13:36:21.751900 master-1 kubenswrapper[4740]: I1014 13:36:21.751853 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-config\") pod \"07974c63-665d-43bd-a568-286d26004725\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") "
Oct 14 13:36:21.751965 master-1 kubenswrapper[4740]: I1014 13:36:21.751950 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcsd8\" (UniqueName: \"kubernetes.io/projected/07974c63-665d-43bd-a568-286d26004725-kube-api-access-kcsd8\") pod \"07974c63-665d-43bd-a568-286d26004725\" (UID: \"07974c63-665d-43bd-a568-286d26004725\") "
Oct 14 13:36:21.757965 master-1 kubenswrapper[4740]: I1014 13:36:21.757906 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07974c63-665d-43bd-a568-286d26004725-kube-api-access-kcsd8" (OuterVolumeSpecName: "kube-api-access-kcsd8") pod "07974c63-665d-43bd-a568-286d26004725" (UID: "07974c63-665d-43bd-a568-286d26004725"). InnerVolumeSpecName "kube-api-access-kcsd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:36:21.778562 master-1 kubenswrapper[4740]: I1014 13:36:21.778443 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07974c63-665d-43bd-a568-286d26004725" (UID: "07974c63-665d-43bd-a568-286d26004725"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:21.782385 master-1 kubenswrapper[4740]: I1014 13:36:21.778999 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-config" (OuterVolumeSpecName: "config") pod "07974c63-665d-43bd-a568-286d26004725" (UID: "07974c63-665d-43bd-a568-286d26004725"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:21.854960 master-1 kubenswrapper[4740]: I1014 13:36:21.854886 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:21.854960 master-1 kubenswrapper[4740]: I1014 13:36:21.854936 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcsd8\" (UniqueName: \"kubernetes.io/projected/07974c63-665d-43bd-a568-286d26004725-kube-api-access-kcsd8\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:21.854960 master-1 kubenswrapper[4740]: I1014 13:36:21.854949 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07974c63-665d-43bd-a568-286d26004725-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:22.068619 master-1 kubenswrapper[4740]: I1014 13:36:22.068515 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-b28pf" podStartSLOduration=3.76851033 podStartE2EDuration="32.068493587s" podCreationTimestamp="2025-10-14 13:35:50 +0000 UTC" firstStartedPulling="2025-10-14 13:35:51.974207233 +0000 UTC m=+1777.784496562" lastFinishedPulling="2025-10-14 13:36:20.27419049 +0000 UTC m=+1806.084479819" observedRunningTime="2025-10-14 13:36:21.987177868 +0000 UTC m=+1807.797467217" watchObservedRunningTime="2025-10-14 13:36:22.068493587 +0000 UTC m=+1807.878782916"
Oct 14 13:36:22.400502 master-1
kubenswrapper[4740]: I1014 13:36:22.400342 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-46645-db-sync-bn4lj" podStartSLOduration=10.021616009 podStartE2EDuration="27.400304086s" podCreationTimestamp="2025-10-14 13:35:55 +0000 UTC" firstStartedPulling="2025-10-14 13:36:02.983400017 +0000 UTC m=+1788.793689346" lastFinishedPulling="2025-10-14 13:36:20.362088094 +0000 UTC m=+1806.172377423" observedRunningTime="2025-10-14 13:36:22.39248273 +0000 UTC m=+1808.202772059" watchObservedRunningTime="2025-10-14 13:36:22.400304086 +0000 UTC m=+1808.210593405"
Oct 14 13:36:22.467816 master-1 kubenswrapper[4740]: I1014 13:36:22.467737 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5755976884-m54wt" podStartSLOduration=7.467716067 podStartE2EDuration="7.467716067s" podCreationTimestamp="2025-10-14 13:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:22.462731346 +0000 UTC m=+1808.273020675" watchObservedRunningTime="2025-10-14 13:36:22.467716067 +0000 UTC m=+1808.278005396"
Oct 14 13:36:22.687622 master-1 kubenswrapper[4740]: I1014 13:36:22.687578 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerStarted","Data":"1c7b94efa39d7670d32309a936c6fab8a72315bb6ae55fba2aca900975b1c833"}
Oct 14 13:36:22.690725 master-1 kubenswrapper[4740]: I1014 13:36:22.690700 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bc7jg" event={"ID":"07974c63-665d-43bd-a568-286d26004725","Type":"ContainerDied","Data":"3d78b76138f77b02d3947dba66d614198e974f4b99c6cd501c2b3bf998508e18"}
Oct 14 13:36:22.690871 master-1 kubenswrapper[4740]: I1014 13:36:22.690852 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d78b76138f77b02d3947dba66d614198e974f4b99c6cd501c2b3bf998508e18"
Oct 14 13:36:22.691009 master-1 kubenswrapper[4740]: I1014 13:36:22.690932 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bc7jg"
Oct 14 13:36:22.693424 master-1 kubenswrapper[4740]: I1014 13:36:22.693373 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-958c54db4-x58ll" event={"ID":"650cb763-7f1d-47ca-89c0-6a8af75df8bf","Type":"ContainerStarted","Data":"ce905d5a1634579a530189318b4b7fc6b9a4e59ca35d35e73dd0ca859c2e226f"}
Oct 14 13:36:22.693506 master-1 kubenswrapper[4740]: I1014 13:36:22.693431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-958c54db4-x58ll" event={"ID":"650cb763-7f1d-47ca-89c0-6a8af75df8bf","Type":"ContainerStarted","Data":"0c871fd6f20e453b21ac8f087bdfea8ea8618e3a2db2517d1f0ad53c76f7432e"}
Oct 14 13:36:22.693773 master-1 kubenswrapper[4740]: I1014 13:36:22.693748 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-958c54db4-x58ll"
Oct 14 13:36:22.693773 master-1 kubenswrapper[4740]: I1014 13:36:22.693774 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-958c54db4-x58ll"
Oct 14 13:36:22.695981 master-1 kubenswrapper[4740]: I1014 13:36:22.695671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z669w" event={"ID":"28738a5a-94be-43a4-a55e-720365a4246b","Type":"ContainerDied","Data":"bb36725723f7926e6fc1a5b5457566ac9acd8f810e9b70628fccd577f06c8180"}
Oct 14 13:36:22.696218 master-1 kubenswrapper[4740]: I1014 13:36:22.696177 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb36725723f7926e6fc1a5b5457566ac9acd8f810e9b70628fccd577f06c8180"
Oct 14 13:36:22.749511 master-1 kubenswrapper[4740]: I1014 13:36:22.749476 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z669w"
Oct 14 13:36:22.780151 master-1 kubenswrapper[4740]: I1014 13:36:22.780084 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76p2t\" (UniqueName: \"kubernetes.io/projected/28738a5a-94be-43a4-a55e-720365a4246b-kube-api-access-76p2t\") pod \"28738a5a-94be-43a4-a55e-720365a4246b\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") "
Oct 14 13:36:22.780394 master-1 kubenswrapper[4740]: I1014 13:36:22.780185 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-config-data\") pod \"28738a5a-94be-43a4-a55e-720365a4246b\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") "
Oct 14 13:36:22.780756 master-1 kubenswrapper[4740]: I1014 13:36:22.780725 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-combined-ca-bundle\") pod \"28738a5a-94be-43a4-a55e-720365a4246b\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") "
Oct 14 13:36:22.780806 master-1 kubenswrapper[4740]: I1014 13:36:22.780763 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-db-sync-config-data\") pod \"28738a5a-94be-43a4-a55e-720365a4246b\" (UID: \"28738a5a-94be-43a4-a55e-720365a4246b\") "
Oct 14 13:36:22.784883 master-1 kubenswrapper[4740]: I1014 13:36:22.784821 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "28738a5a-94be-43a4-a55e-720365a4246b" (UID: "28738a5a-94be-43a4-a55e-720365a4246b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:22.785831 master-1 kubenswrapper[4740]: I1014 13:36:22.785775 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28738a5a-94be-43a4-a55e-720365a4246b-kube-api-access-76p2t" (OuterVolumeSpecName: "kube-api-access-76p2t") pod "28738a5a-94be-43a4-a55e-720365a4246b" (UID: "28738a5a-94be-43a4-a55e-720365a4246b"). InnerVolumeSpecName "kube-api-access-76p2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:36:22.802975 master-1 kubenswrapper[4740]: I1014 13:36:22.802916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28738a5a-94be-43a4-a55e-720365a4246b" (UID: "28738a5a-94be-43a4-a55e-720365a4246b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:22.829576 master-1 kubenswrapper[4740]: I1014 13:36:22.829468 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-config-data" (OuterVolumeSpecName: "config-data") pod "28738a5a-94be-43a4-a55e-720365a4246b" (UID: "28738a5a-94be-43a4-a55e-720365a4246b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:22.883126 master-1 kubenswrapper[4740]: I1014 13:36:22.883044 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76p2t\" (UniqueName: \"kubernetes.io/projected/28738a5a-94be-43a4-a55e-720365a4246b-kube-api-access-76p2t\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:22.883126 master-1 kubenswrapper[4740]: I1014 13:36:22.883084 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:22.883126 master-1 kubenswrapper[4740]: I1014 13:36:22.883096 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:22.883126 master-1 kubenswrapper[4740]: I1014 13:36:22.883105 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/28738a5a-94be-43a4-a55e-720365a4246b-db-sync-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:23.050871 master-1 kubenswrapper[4740]: I1014 13:36:23.050737 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hd9hz" podStartSLOduration=11.845811358 podStartE2EDuration="31.050721434s" podCreationTimestamp="2025-10-14 13:35:52 +0000 UTC" firstStartedPulling="2025-10-14 13:36:01.024788038 +0000 UTC m=+1786.835077367" lastFinishedPulling="2025-10-14 13:36:20.229698114 +0000 UTC m=+1806.039987443" observedRunningTime="2025-10-14 13:36:23.049786869 +0000 UTC m=+1808.860076198" watchObservedRunningTime="2025-10-14 13:36:23.050721434 +0000 UTC m=+1808.861010763"
Oct 14 13:36:23.391906 master-1 kubenswrapper[4740]: I1014 13:36:23.391349 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-958c54db4-x58ll" podStartSLOduration=3.391316215 podStartE2EDuration="3.391316215s" podCreationTimestamp="2025-10-14 13:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:23.384763241 +0000 UTC m=+1809.195052610" watchObservedRunningTime="2025-10-14 13:36:23.391316215 +0000 UTC m=+1809.201605584"
Oct 14 13:36:23.707036 master-1 kubenswrapper[4740]: I1014 13:36:23.706904 4740 generic.go:334] "Generic (PLEG): container finished" podID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerID="1c7b94efa39d7670d32309a936c6fab8a72315bb6ae55fba2aca900975b1c833" exitCode=1
Oct 14 13:36:23.707036 master-1 kubenswrapper[4740]: I1014 13:36:23.706957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"1c7b94efa39d7670d32309a936c6fab8a72315bb6ae55fba2aca900975b1c833"}
Oct 14 13:36:23.707036 master-1 kubenswrapper[4740]: I1014 13:36:23.707023 4740 scope.go:117] "RemoveContainer" containerID="febbf3ec31b5190050844b99797abb1ce2f17e5837ec4b8b034a4dd05847c85c"
Oct 14 13:36:23.707625 master-1 kubenswrapper[4740]: I1014 13:36:23.707033 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z669w"
Oct 14 13:36:23.707913 master-1 kubenswrapper[4740]: I1014 13:36:23.707877 4740 scope.go:117] "RemoveContainer" containerID="1c7b94efa39d7670d32309a936c6fab8a72315bb6ae55fba2aca900975b1c833"
Oct 14 13:36:23.708432 master-1 kubenswrapper[4740]: E1014 13:36:23.708353 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:36:24.718310 master-1 kubenswrapper[4740]: I1014 13:36:24.718122 4740 scope.go:117] "RemoveContainer" containerID="1c7b94efa39d7670d32309a936c6fab8a72315bb6ae55fba2aca900975b1c833"
Oct 14 13:36:24.719428 master-1 kubenswrapper[4740]: E1014 13:36:24.718437 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:36:26.100957 master-1 kubenswrapper[4740]: I1014 13:36:26.100852 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55c4fcb4cb-xfg9j"]
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: E1014 13:36:26.101206 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28738a5a-94be-43a4-a55e-720365a4246b" containerName="glance-db-sync"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.101223 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="28738a5a-94be-43a4-a55e-720365a4246b" containerName="glance-db-sync"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: E1014 13:36:26.101282 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07974c63-665d-43bd-a568-286d26004725" containerName="neutron-db-sync"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.101298 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="07974c63-665d-43bd-a568-286d26004725" containerName="neutron-db-sync"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.101470 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="07974c63-665d-43bd-a568-286d26004725" containerName="neutron-db-sync"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.101494 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="28738a5a-94be-43a4-a55e-720365a4246b" containerName="glance-db-sync"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.102588 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.106339 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.106663 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 14 13:36:26.153106 master-1 kubenswrapper[4740]: I1014 13:36:26.106837 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Oct 14 13:36:26.200638 master-1 kubenswrapper[4740]: I1014 13:36:26.200570 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55c4fcb4cb-xfg9j"]
Oct 14 13:36:26.254147 master-1 kubenswrapper[4740]: I1014 13:36:26.254019 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2mg9\" (UniqueName: \"kubernetes.io/projected/0401d960-0b3b-4a30-93de-4dc6064a8943-kube-api-access-p2mg9\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: 
\"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.254473 master-1 kubenswrapper[4740]: I1014 13:36:26.254416 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-ovndb-tls-certs\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.254563 master-1 kubenswrapper[4740]: I1014 13:36:26.254548 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-config\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.255012 master-1 kubenswrapper[4740]: I1014 13:36:26.254924 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-combined-ca-bundle\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.255199 master-1 kubenswrapper[4740]: I1014 13:36:26.255166 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-httpd-config\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.360866 master-1 kubenswrapper[4740]: I1014 13:36:26.356559 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-ovndb-tls-certs\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.360866 master-1 kubenswrapper[4740]: I1014 13:36:26.356636 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-config\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.360866 master-1 kubenswrapper[4740]: I1014 13:36:26.356707 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-combined-ca-bundle\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.360866 master-1 kubenswrapper[4740]: I1014 13:36:26.356774 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-httpd-config\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.360866 master-1 kubenswrapper[4740]: I1014 13:36:26.356814 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2mg9\" (UniqueName: \"kubernetes.io/projected/0401d960-0b3b-4a30-93de-4dc6064a8943-kube-api-access-p2mg9\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.366319 master-1 kubenswrapper[4740]: I1014 13:36:26.365549 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-httpd-config\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.366319 master-1 kubenswrapper[4740]: I1014 13:36:26.365926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-combined-ca-bundle\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.366547 master-1 kubenswrapper[4740]: I1014 13:36:26.366347 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-ovndb-tls-certs\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.368468 master-1 kubenswrapper[4740]: I1014 13:36:26.368374 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-config\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.486182 master-1 kubenswrapper[4740]: I1014 13:36:26.486044 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2mg9\" (UniqueName: \"kubernetes.io/projected/0401d960-0b3b-4a30-93de-4dc6064a8943-kube-api-access-p2mg9\") pod \"neutron-55c4fcb4cb-xfg9j\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:26.719806 master-1 kubenswrapper[4740]: I1014 13:36:26.719648 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55c4fcb4cb-xfg9j"
Oct 14 13:36:27.747993 master-1 kubenswrapper[4740]: I1014 13:36:27.747926 4740 generic.go:334] "Generic (PLEG): container finished" podID="e3f31b4a-3d7a-4274-befd-82f1bc035e07" containerID="ddac44e8e70f5e96ad9e6a23164b8004361542efab2488d438c25a765cd435a2" exitCode=0
Oct 14 13:36:27.749705 master-1 kubenswrapper[4740]: I1014 13:36:27.748862 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-b28pf" event={"ID":"e3f31b4a-3d7a-4274-befd-82f1bc035e07","Type":"ContainerDied","Data":"ddac44e8e70f5e96ad9e6a23164b8004361542efab2488d438c25a765cd435a2"}
Oct 14 13:36:28.086135 master-1 kubenswrapper[4740]: I1014 13:36:28.085565 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64c48f9bcf-6pg9l"]
Oct 14 13:36:28.092938 master-1 kubenswrapper[4740]: I1014 13:36:28.092891 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.173412 master-1 kubenswrapper[4740]: I1014 13:36:28.173341 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64c48f9bcf-6pg9l"]
Oct 14 13:36:28.203896 master-1 kubenswrapper[4740]: I1014 13:36:28.203810 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-svc\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.204507 master-1 kubenswrapper[4740]: I1014 13:36:28.203904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvph\" (UniqueName: \"kubernetes.io/projected/4698540e-d270-4f76-8a8e-f7c3eea7b601-kube-api-access-kwvph\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " 
pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.204507 master-1 kubenswrapper[4740]: I1014 13:36:28.203978 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-swift-storage-0\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.204507 master-1 kubenswrapper[4740]: I1014 13:36:28.204054 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-nb\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.204507 master-1 kubenswrapper[4740]: I1014 13:36:28.204111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-sb\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.204507 master-1 kubenswrapper[4740]: I1014 13:36:28.204150 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-config\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.310689 master-1 kubenswrapper[4740]: I1014 13:36:28.309683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-sb\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.310689 master-1 kubenswrapper[4740]: I1014 13:36:28.309817 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-config\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.310689 master-1 kubenswrapper[4740]: I1014 13:36:28.310164 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-svc\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.310689 master-1 kubenswrapper[4740]: I1014 13:36:28.310283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwvph\" (UniqueName: \"kubernetes.io/projected/4698540e-d270-4f76-8a8e-f7c3eea7b601-kube-api-access-kwvph\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.310689 master-1 kubenswrapper[4740]: I1014 13:36:28.310411 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-swift-storage-0\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.310689 master-1 kubenswrapper[4740]: I1014 13:36:28.310496 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-nb\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.311821 master-1 kubenswrapper[4740]: I1014 13:36:28.311745 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-sb\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.312555 master-1 kubenswrapper[4740]: I1014 13:36:28.312480 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-config\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.313145 master-1 kubenswrapper[4740]: I1014 13:36:28.313101 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-svc\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.313645 master-1 kubenswrapper[4740]: I1014 13:36:28.313593 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-swift-storage-0\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.314499 master-1 kubenswrapper[4740]: I1014 13:36:28.314312 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-nb\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.499969 master-1 kubenswrapper[4740]: I1014 13:36:28.499900 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwvph\" (UniqueName: \"kubernetes.io/projected/4698540e-d270-4f76-8a8e-f7c3eea7b601-kube-api-access-kwvph\") pod \"dnsmasq-dns-64c48f9bcf-6pg9l\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:28.722414 master-1 kubenswrapper[4740]: I1014 13:36:28.722324 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l"
Oct 14 13:36:29.771753 master-1 kubenswrapper[4740]: I1014 13:36:29.771673 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-b28pf" event={"ID":"e3f31b4a-3d7a-4274-befd-82f1bc035e07","Type":"ContainerDied","Data":"b83e1b2cd71b2c2e01416cae88c4656a765374de56166c82dfd4c610b06c8973"}
Oct 14 13:36:29.771753 master-1 kubenswrapper[4740]: I1014 13:36:29.771738 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b83e1b2cd71b2c2e01416cae88c4656a765374de56166c82dfd4c610b06c8973"
Oct 14 13:36:29.777780 master-1 kubenswrapper[4740]: I1014 13:36:29.773481 4740 generic.go:334] "Generic (PLEG): container finished" podID="97045127-d8fb-49d6-8a81-816517ba472d" containerID="64f6ca22fec4006b855c5f2f150e55db7483bc312f32e0ebc6f1f255917c6710" exitCode=0
Oct 14 13:36:29.777780 master-1 kubenswrapper[4740]: I1014 13:36:29.773549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-db-sync-bn4lj" event={"ID":"97045127-d8fb-49d6-8a81-816517ba472d","Type":"ContainerDied","Data":"64f6ca22fec4006b855c5f2f150e55db7483bc312f32e0ebc6f1f255917c6710"}
Oct 14 13:36:29.777780 master-1 kubenswrapper[4740]: I1014 13:36:29.775672 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" event={"ID":"4698540e-d270-4f76-8a8e-f7c3eea7b601","Type":"ContainerStarted","Data":"1c9d6187e77b6092cc8d666d7bbfb97e10b4c0710f17c52dcadebcefb94b46b9"} Oct 14 13:36:29.787658 master-1 kubenswrapper[4740]: I1014 13:36:29.787612 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-b28pf" Oct 14 13:36:29.858842 master-1 kubenswrapper[4740]: I1014 13:36:29.858797 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjm2l\" (UniqueName: \"kubernetes.io/projected/e3f31b4a-3d7a-4274-befd-82f1bc035e07-kube-api-access-hjm2l\") pod \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " Oct 14 13:36:29.860045 master-1 kubenswrapper[4740]: I1014 13:36:29.859950 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-config-data\") pod \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " Oct 14 13:36:29.860097 master-1 kubenswrapper[4740]: I1014 13:36:29.860043 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-combined-ca-bundle\") pod \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\" (UID: \"e3f31b4a-3d7a-4274-befd-82f1bc035e07\") " Oct 14 13:36:29.862470 master-1 kubenswrapper[4740]: I1014 13:36:29.862423 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f31b4a-3d7a-4274-befd-82f1bc035e07-kube-api-access-hjm2l" (OuterVolumeSpecName: "kube-api-access-hjm2l") pod "e3f31b4a-3d7a-4274-befd-82f1bc035e07" (UID: "e3f31b4a-3d7a-4274-befd-82f1bc035e07"). InnerVolumeSpecName "kube-api-access-hjm2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:29.864402 master-1 kubenswrapper[4740]: I1014 13:36:29.864352 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64c48f9bcf-6pg9l"] Oct 14 13:36:29.894602 master-1 kubenswrapper[4740]: I1014 13:36:29.894515 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3f31b4a-3d7a-4274-befd-82f1bc035e07" (UID: "e3f31b4a-3d7a-4274-befd-82f1bc035e07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:29.921336 master-1 kubenswrapper[4740]: I1014 13:36:29.921252 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-config-data" (OuterVolumeSpecName: "config-data") pod "e3f31b4a-3d7a-4274-befd-82f1bc035e07" (UID: "e3f31b4a-3d7a-4274-befd-82f1bc035e07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:29.963378 master-1 kubenswrapper[4740]: I1014 13:36:29.963308 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:29.963378 master-1 kubenswrapper[4740]: I1014 13:36:29.963355 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f31b4a-3d7a-4274-befd-82f1bc035e07-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:29.963378 master-1 kubenswrapper[4740]: I1014 13:36:29.963368 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjm2l\" (UniqueName: \"kubernetes.io/projected/e3f31b4a-3d7a-4274-befd-82f1bc035e07-kube-api-access-hjm2l\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:30.245941 master-1 kubenswrapper[4740]: W1014 13:36:30.245851 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0401d960_0b3b_4a30_93de_4dc6064a8943.slice/crio-d2bb76996b0df7d1d10c465556d9194de87115973f9ee9e76910685ae5ec2966 WatchSource:0}: Error finding container d2bb76996b0df7d1d10c465556d9194de87115973f9ee9e76910685ae5ec2966: Status 404 returned error can't find the container with id d2bb76996b0df7d1d10c465556d9194de87115973f9ee9e76910685ae5ec2966 Oct 14 13:36:30.417640 master-1 kubenswrapper[4740]: I1014 13:36:30.417543 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55c4fcb4cb-xfg9j"] Oct 14 13:36:30.794574 master-1 kubenswrapper[4740]: I1014 13:36:30.794471 4740 generic.go:334] "Generic (PLEG): container finished" podID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerID="a684d1da92f0debc0941a2955ca26c91d985f0d30578376353a742194e8eacc6" exitCode=0 Oct 14 13:36:30.795074 master-1 kubenswrapper[4740]: I1014 13:36:30.794593 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" event={"ID":"4698540e-d270-4f76-8a8e-f7c3eea7b601","Type":"ContainerDied","Data":"a684d1da92f0debc0941a2955ca26c91d985f0d30578376353a742194e8eacc6"} Oct 14 13:36:30.797625 master-1 kubenswrapper[4740]: I1014 13:36:30.797568 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c4fcb4cb-xfg9j" event={"ID":"0401d960-0b3b-4a30-93de-4dc6064a8943","Type":"ContainerStarted","Data":"a5fc8ecdc6b86053832093cfe590deac075689a8f33aebf33bb2e2ce8db6920c"} Oct 14 13:36:30.797625 master-1 kubenswrapper[4740]: I1014 13:36:30.797598 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c4fcb4cb-xfg9j" event={"ID":"0401d960-0b3b-4a30-93de-4dc6064a8943","Type":"ContainerStarted","Data":"d2bb76996b0df7d1d10c465556d9194de87115973f9ee9e76910685ae5ec2966"} Oct 14 13:36:30.797738 master-1 kubenswrapper[4740]: I1014 13:36:30.797636 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-b28pf" Oct 14 13:36:31.731260 master-1 kubenswrapper[4740]: I1014 13:36:31.731166 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:36:31.808759 master-1 kubenswrapper[4740]: I1014 13:36:31.808692 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c4fcb4cb-xfg9j" event={"ID":"0401d960-0b3b-4a30-93de-4dc6064a8943","Type":"ContainerStarted","Data":"07fb7818c23e8c64e34cf8ad9848c0665dc4020ff4bf533314698979d8687fcf"} Oct 14 13:36:31.809255 master-1 kubenswrapper[4740]: I1014 13:36:31.808853 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55c4fcb4cb-xfg9j" Oct 14 13:36:31.811810 master-1 kubenswrapper[4740]: I1014 13:36:31.811757 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-db-sync-bn4lj" event={"ID":"97045127-d8fb-49d6-8a81-816517ba472d","Type":"ContainerDied","Data":"f3326807baeefd0e4f8c22e6a3aa65d5479d449bcba1f1bab0662636b7f72794"} Oct 14 13:36:31.811874 master-1 kubenswrapper[4740]: I1014 13:36:31.811811 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3326807baeefd0e4f8c22e6a3aa65d5479d449bcba1f1bab0662636b7f72794" Oct 14 13:36:31.812101 master-1 kubenswrapper[4740]: I1014 13:36:31.812058 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-db-sync-bn4lj" Oct 14 13:36:31.814512 master-1 kubenswrapper[4740]: I1014 13:36:31.814474 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" event={"ID":"4698540e-d270-4f76-8a8e-f7c3eea7b601","Type":"ContainerStarted","Data":"6463cfb64f4b942a367a49a6e1764afdbfbfbb9ce54aa48ab514f1215ac22000"} Oct 14 13:36:31.814669 master-1 kubenswrapper[4740]: I1014 13:36:31.814636 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" Oct 14 13:36:31.816189 master-1 kubenswrapper[4740]: I1014 13:36:31.816142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9gt2\" (UniqueName: \"kubernetes.io/projected/97045127-d8fb-49d6-8a81-816517ba472d-kube-api-access-r9gt2\") pod \"97045127-d8fb-49d6-8a81-816517ba472d\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " Oct 14 13:36:31.816337 master-1 kubenswrapper[4740]: I1014 13:36:31.816301 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97045127-d8fb-49d6-8a81-816517ba472d-etc-machine-id\") pod \"97045127-d8fb-49d6-8a81-816517ba472d\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " Oct 14 13:36:31.816379 master-1 kubenswrapper[4740]: I1014 13:36:31.816349 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-combined-ca-bundle\") pod \"97045127-d8fb-49d6-8a81-816517ba472d\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " Oct 14 13:36:31.816417 master-1 kubenswrapper[4740]: I1014 13:36:31.816381 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-db-sync-config-data\") pod 
\"97045127-d8fb-49d6-8a81-816517ba472d\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " Oct 14 13:36:31.816623 master-1 kubenswrapper[4740]: I1014 13:36:31.816469 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-scripts\") pod \"97045127-d8fb-49d6-8a81-816517ba472d\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " Oct 14 13:36:31.816687 master-1 kubenswrapper[4740]: I1014 13:36:31.816624 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-config-data\") pod \"97045127-d8fb-49d6-8a81-816517ba472d\" (UID: \"97045127-d8fb-49d6-8a81-816517ba472d\") " Oct 14 13:36:31.816901 master-1 kubenswrapper[4740]: I1014 13:36:31.816858 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97045127-d8fb-49d6-8a81-816517ba472d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "97045127-d8fb-49d6-8a81-816517ba472d" (UID: "97045127-d8fb-49d6-8a81-816517ba472d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:31.817148 master-1 kubenswrapper[4740]: I1014 13:36:31.817091 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97045127-d8fb-49d6-8a81-816517ba472d-etc-machine-id\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:31.821738 master-1 kubenswrapper[4740]: I1014 13:36:31.821656 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "97045127-d8fb-49d6-8a81-816517ba472d" (UID: "97045127-d8fb-49d6-8a81-816517ba472d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:31.821960 master-1 kubenswrapper[4740]: I1014 13:36:31.821915 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-scripts" (OuterVolumeSpecName: "scripts") pod "97045127-d8fb-49d6-8a81-816517ba472d" (UID: "97045127-d8fb-49d6-8a81-816517ba472d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:31.823393 master-1 kubenswrapper[4740]: I1014 13:36:31.823315 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97045127-d8fb-49d6-8a81-816517ba472d-kube-api-access-r9gt2" (OuterVolumeSpecName: "kube-api-access-r9gt2") pod "97045127-d8fb-49d6-8a81-816517ba472d" (UID: "97045127-d8fb-49d6-8a81-816517ba472d"). InnerVolumeSpecName "kube-api-access-r9gt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:31.855647 master-1 kubenswrapper[4740]: I1014 13:36:31.855551 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97045127-d8fb-49d6-8a81-816517ba472d" (UID: "97045127-d8fb-49d6-8a81-816517ba472d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:31.899087 master-1 kubenswrapper[4740]: I1014 13:36:31.898991 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-config-data" (OuterVolumeSpecName: "config-data") pod "97045127-d8fb-49d6-8a81-816517ba472d" (UID: "97045127-d8fb-49d6-8a81-816517ba472d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:31.919576 master-1 kubenswrapper[4740]: I1014 13:36:31.919165 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9gt2\" (UniqueName: \"kubernetes.io/projected/97045127-d8fb-49d6-8a81-816517ba472d-kube-api-access-r9gt2\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:31.919576 master-1 kubenswrapper[4740]: I1014 13:36:31.919217 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:31.919576 master-1 kubenswrapper[4740]: I1014 13:36:31.919250 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-db-sync-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:31.919576 master-1 kubenswrapper[4740]: I1014 13:36:31.919262 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-scripts\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:31.919576 master-1 kubenswrapper[4740]: I1014 13:36:31.919273 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97045127-d8fb-49d6-8a81-816517ba472d-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:32.259595 master-1 kubenswrapper[4740]: I1014 13:36:32.259492 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55c4fcb4cb-xfg9j" podStartSLOduration=7.259472667 podStartE2EDuration="7.259472667s" podCreationTimestamp="2025-10-14 13:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:32.073381657 +0000 UTC m=+1817.883670986" 
watchObservedRunningTime="2025-10-14 13:36:32.259472667 +0000 UTC m=+1818.069761996" Oct 14 13:36:32.262551 master-1 kubenswrapper[4740]: I1014 13:36:32.262479 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:36:32.262925 master-1 kubenswrapper[4740]: E1014 13:36:32.262890 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f31b4a-3d7a-4274-befd-82f1bc035e07" containerName="heat-db-sync" Oct 14 13:36:32.262925 master-1 kubenswrapper[4740]: I1014 13:36:32.262909 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f31b4a-3d7a-4274-befd-82f1bc035e07" containerName="heat-db-sync" Oct 14 13:36:32.263044 master-1 kubenswrapper[4740]: E1014 13:36:32.262934 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97045127-d8fb-49d6-8a81-816517ba472d" containerName="cinder-46645-db-sync" Oct 14 13:36:32.263044 master-1 kubenswrapper[4740]: I1014 13:36:32.262942 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="97045127-d8fb-49d6-8a81-816517ba472d" containerName="cinder-46645-db-sync" Oct 14 13:36:32.263142 master-1 kubenswrapper[4740]: I1014 13:36:32.263094 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f31b4a-3d7a-4274-befd-82f1bc035e07" containerName="heat-db-sync" Oct 14 13:36:32.263142 master-1 kubenswrapper[4740]: I1014 13:36:32.263117 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="97045127-d8fb-49d6-8a81-816517ba472d" containerName="cinder-46645-db-sync" Oct 14 13:36:32.264056 master-1 kubenswrapper[4740]: I1014 13:36:32.264016 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.266625 master-1 kubenswrapper[4740]: I1014 13:36:32.266524 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" podStartSLOduration=5.266489452 podStartE2EDuration="5.266489452s" podCreationTimestamp="2025-10-14 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:32.100527631 +0000 UTC m=+1817.910816960" watchObservedRunningTime="2025-10-14 13:36:32.266489452 +0000 UTC m=+1818.076778781" Oct 14 13:36:32.267887 master-1 kubenswrapper[4740]: I1014 13:36:32.267839 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 13:36:32.269254 master-1 kubenswrapper[4740]: I1014 13:36:32.269200 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-46645-default-external-config-data" Oct 14 13:36:32.428631 master-1 kubenswrapper[4740]: I1014 13:36:32.428552 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-scripts\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.428836 master-1 kubenswrapper[4740]: I1014 13:36:32.428649 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-combined-ca-bundle\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.428836 master-1 kubenswrapper[4740]: I1014 13:36:32.428766 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gbn\" (UniqueName: \"kubernetes.io/projected/e230307d-3fb2-44c5-8259-563e509c9f68-kube-api-access-r5gbn\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.428836 master-1 kubenswrapper[4740]: I1014 13:36:32.428826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-config-data\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.428952 master-1 kubenswrapper[4740]: I1014 13:36:32.428873 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-httpd-run\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.428952 master-1 kubenswrapper[4740]: I1014 13:36:32.428920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-logs\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.429017 master-1 kubenswrapper[4740]: I1014 13:36:32.428965 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: 
\"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.531366 master-1 kubenswrapper[4740]: I1014 13:36:32.531170 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-httpd-run\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.531366 master-1 kubenswrapper[4740]: I1014 13:36:32.531272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-logs\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.531366 master-1 kubenswrapper[4740]: I1014 13:36:32.531311 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.531828 master-1 kubenswrapper[4740]: I1014 13:36:32.531385 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-scripts\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.531828 master-1 kubenswrapper[4740]: I1014 13:36:32.531417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-combined-ca-bundle\") 
pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.531828 master-1 kubenswrapper[4740]: I1014 13:36:32.531490 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5gbn\" (UniqueName: \"kubernetes.io/projected/e230307d-3fb2-44c5-8259-563e509c9f68-kube-api-access-r5gbn\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.531828 master-1 kubenswrapper[4740]: I1014 13:36:32.531531 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-config-data\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.532178 master-1 kubenswrapper[4740]: I1014 13:36:32.532053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-httpd-run\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.532178 master-1 kubenswrapper[4740]: I1014 13:36:32.532136 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-logs\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.533868 master-1 kubenswrapper[4740]: I1014 13:36:32.533802 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 13:36:32.533868 master-1 kubenswrapper[4740]: I1014 13:36:32.533858 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/108a495d9f41cc9de81d4e0f645aaa659a8dff504f4fe9597cfbed6c597a62b0/globalmount\"" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.535062 master-1 kubenswrapper[4740]: I1014 13:36:32.534998 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-scripts\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.537022 master-1 kubenswrapper[4740]: I1014 13:36:32.536951 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-combined-ca-bundle\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.542703 master-1 kubenswrapper[4740]: I1014 13:36:32.542639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-config-data\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.555727 master-1 kubenswrapper[4740]: I1014 13:36:32.555639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5gbn\" (UniqueName: 
\"kubernetes.io/projected/e230307d-3fb2-44c5-8259-563e509c9f68-kube-api-access-r5gbn\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:32.629393 master-1 kubenswrapper[4740]: I1014 13:36:32.629304 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:36:34.208059 master-1 kubenswrapper[4740]: I1014 13:36:34.207986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:34.687949 master-1 kubenswrapper[4740]: I1014 13:36:34.687840 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:35.120535 master-1 kubenswrapper[4740]: I1014 13:36:35.120466 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46645-scheduler-0"] Oct 14 13:36:35.122175 master-1 kubenswrapper[4740]: I1014 13:36:35.122144 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.124388 master-1 kubenswrapper[4740]: I1014 13:36:35.124312 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-scripts" Oct 14 13:36:35.125042 master-1 kubenswrapper[4740]: I1014 13:36:35.125017 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-scheduler-config-data" Oct 14 13:36:35.130250 master-1 kubenswrapper[4740]: I1014 13:36:35.130175 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-config-data" Oct 14 13:36:35.148154 master-1 kubenswrapper[4740]: I1014 13:36:35.148084 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-scheduler-0"] Oct 14 13:36:35.207796 master-1 kubenswrapper[4740]: I1014 13:36:35.207725 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f653157f-4652-49a8-a3f6-0d952ce477f5-etc-machine-id\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.208026 master-1 kubenswrapper[4740]: I1014 13:36:35.208012 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-scripts\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.208150 master-1 kubenswrapper[4740]: I1014 13:36:35.208133 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-combined-ca-bundle\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " 
pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.208552 master-1 kubenswrapper[4740]: I1014 13:36:35.208535 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data-custom\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.208648 master-1 kubenswrapper[4740]: I1014 13:36:35.208636 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.209003 master-1 kubenswrapper[4740]: I1014 13:36:35.208962 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqg7l\" (UniqueName: \"kubernetes.io/projected/f653157f-4652-49a8-a3f6-0d952ce477f5-kube-api-access-xqg7l\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.320279 master-1 kubenswrapper[4740]: I1014 13:36:35.311287 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f653157f-4652-49a8-a3f6-0d952ce477f5-etc-machine-id\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.320279 master-1 kubenswrapper[4740]: I1014 13:36:35.311352 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-scripts\") pod \"cinder-46645-scheduler-0\" (UID: 
\"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.320279 master-1 kubenswrapper[4740]: I1014 13:36:35.311422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-combined-ca-bundle\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.320279 master-1 kubenswrapper[4740]: I1014 13:36:35.311447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data-custom\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.320279 master-1 kubenswrapper[4740]: I1014 13:36:35.311484 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.320279 master-1 kubenswrapper[4740]: I1014 13:36:35.311548 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqg7l\" (UniqueName: \"kubernetes.io/projected/f653157f-4652-49a8-a3f6-0d952ce477f5-kube-api-access-xqg7l\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.320279 master-1 kubenswrapper[4740]: I1014 13:36:35.312119 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f653157f-4652-49a8-a3f6-0d952ce477f5-etc-machine-id\") pod \"cinder-46645-scheduler-0\" (UID: 
\"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.326677 master-1 kubenswrapper[4740]: I1014 13:36:35.324711 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-scripts\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.332293 master-1 kubenswrapper[4740]: I1014 13:36:35.330769 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.332684 master-1 kubenswrapper[4740]: I1014 13:36:35.332657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-combined-ca-bundle\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.340309 master-1 kubenswrapper[4740]: I1014 13:36:35.340134 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data-custom\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.365322 master-1 kubenswrapper[4740]: I1014 13:36:35.365257 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46645-backup-0"] Oct 14 13:36:35.366492 master-1 kubenswrapper[4740]: I1014 13:36:35.366362 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqg7l\" (UniqueName: 
\"kubernetes.io/projected/f653157f-4652-49a8-a3f6-0d952ce477f5-kube-api-access-xqg7l\") pod \"cinder-46645-scheduler-0\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.369103 master-1 kubenswrapper[4740]: I1014 13:36:35.369086 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.375129 master-1 kubenswrapper[4740]: I1014 13:36:35.374548 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-backup-config-data" Oct 14 13:36:35.416693 master-1 kubenswrapper[4740]: I1014 13:36:35.416560 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-sys\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.416693 master-1 kubenswrapper[4740]: I1014 13:36:35.416679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-machine-id\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416724 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-nvme\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416746 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-dev\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-cinder\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416813 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data-custom\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416835 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-lib-modules\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416887 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-lib-cinder\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416915 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rr2lx\" (UniqueName: \"kubernetes.io/projected/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-kube-api-access-rr2lx\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.416977 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-brick\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.417626 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-run\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.418056 master-1 kubenswrapper[4740]: I1014 13:36:35.417816 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-iscsi\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.422779 master-1 kubenswrapper[4740]: I1014 13:36:35.418528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.422779 master-1 kubenswrapper[4740]: I1014 13:36:35.418720 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-scripts\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.422779 master-1 kubenswrapper[4740]: I1014 13:36:35.418754 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-combined-ca-bundle\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.439169 master-1 kubenswrapper[4740]: I1014 13:36:35.438982 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-backup-0"] Oct 14 13:36:35.482984 master-1 kubenswrapper[4740]: I1014 13:36:35.482917 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523381 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-run\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523440 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-iscsi\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data\") pod 
\"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523543 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-scripts\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523569 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-combined-ca-bundle\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-sys\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523680 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-machine-id\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-nvme\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " 
pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-dev\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-cinder\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523818 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data-custom\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523846 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-lib-modules\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-lib-cinder\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 
kubenswrapper[4740]: I1014 13:36:35.523917 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr2lx\" (UniqueName: \"kubernetes.io/projected/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-kube-api-access-rr2lx\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.523953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-brick\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.525015 master-1 kubenswrapper[4740]: I1014 13:36:35.524303 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-brick\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.531999 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-machine-id\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.532175 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-nvme\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.532246 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-dev\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.532382 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-cinder\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.533377 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-run\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.533417 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-lib-modules\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.533441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-iscsi\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534639 master-1 kubenswrapper[4740]: I1014 13:36:35.534417 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-sys\") pod \"cinder-46645-backup-0\" 
(UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.534936 master-1 kubenswrapper[4740]: I1014 13:36:35.534673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-lib-cinder\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.536561 master-1 kubenswrapper[4740]: I1014 13:36:35.536421 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data-custom\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.537042 master-1 kubenswrapper[4740]: I1014 13:36:35.536909 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.538786 master-1 kubenswrapper[4740]: I1014 13:36:35.538450 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-combined-ca-bundle\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.538786 master-1 kubenswrapper[4740]: I1014 13:36:35.538723 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-scripts\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.568064 master-1 
kubenswrapper[4740]: I1014 13:36:35.567975 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr2lx\" (UniqueName: \"kubernetes.io/projected/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-kube-api-access-rr2lx\") pod \"cinder-46645-backup-0\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.667980 master-1 kubenswrapper[4740]: I1014 13:36:35.667856 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64c48f9bcf-6pg9l"] Oct 14 13:36:35.669222 master-1 kubenswrapper[4740]: I1014 13:36:35.668144 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" podUID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerName="dnsmasq-dns" containerID="cri-o://6463cfb64f4b942a367a49a6e1764afdbfbfbb9ce54aa48ab514f1215ac22000" gracePeriod=10 Oct 14 13:36:35.699028 master-1 kubenswrapper[4740]: I1014 13:36:35.698616 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:36:35.789897 master-1 kubenswrapper[4740]: I1014 13:36:35.789808 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-backup-0" Oct 14 13:36:35.868806 master-1 kubenswrapper[4740]: I1014 13:36:35.868741 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"e230307d-3fb2-44c5-8259-563e509c9f68","Type":"ContainerStarted","Data":"936e517acfa6466126b44ca7a20619dfd79298ad00adadb9fd2115b3a07b87f8"} Oct 14 13:36:35.871338 master-1 kubenswrapper[4740]: I1014 13:36:35.871291 4740 generic.go:334] "Generic (PLEG): container finished" podID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerID="6463cfb64f4b942a367a49a6e1764afdbfbfbb9ce54aa48ab514f1215ac22000" exitCode=0 Oct 14 13:36:35.871338 master-1 kubenswrapper[4740]: I1014 13:36:35.871320 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" event={"ID":"4698540e-d270-4f76-8a8e-f7c3eea7b601","Type":"ContainerDied","Data":"6463cfb64f4b942a367a49a6e1764afdbfbfbb9ce54aa48ab514f1215ac22000"} Oct 14 13:36:36.082267 master-1 kubenswrapper[4740]: I1014 13:36:36.068082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-scheduler-0"] Oct 14 13:36:36.082267 master-1 kubenswrapper[4740]: I1014 13:36:36.077389 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46645-api-2"] Oct 14 13:36:36.082267 master-1 kubenswrapper[4740]: I1014 13:36:36.080772 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.098540 master-1 kubenswrapper[4740]: I1014 13:36:36.094669 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-api-config-data" Oct 14 13:36:36.117813 master-1 kubenswrapper[4740]: I1014 13:36:36.117447 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-api-2"] Oct 14 13:36:36.174004 master-1 kubenswrapper[4740]: I1014 13:36:36.173623 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-combined-ca-bundle\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.174004 master-1 kubenswrapper[4740]: I1014 13:36:36.173683 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e84a47ae-f765-4b20-b59c-958d505a497d-logs\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.174004 master-1 kubenswrapper[4740]: I1014 13:36:36.173741 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnpx\" (UniqueName: \"kubernetes.io/projected/e84a47ae-f765-4b20-b59c-958d505a497d-kube-api-access-nfnpx\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.174004 master-1 kubenswrapper[4740]: I1014 13:36:36.173798 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data-custom\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " 
pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.174004 master-1 kubenswrapper[4740]: I1014 13:36:36.173833 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-scripts\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.174004 master-1 kubenswrapper[4740]: I1014 13:36:36.173883 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.174004 master-1 kubenswrapper[4740]: I1014 13:36:36.173903 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e84a47ae-f765-4b20-b59c-958d505a497d-etc-machine-id\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.275851 master-1 kubenswrapper[4740]: I1014 13:36:36.275793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfnpx\" (UniqueName: \"kubernetes.io/projected/e84a47ae-f765-4b20-b59c-958d505a497d-kube-api-access-nfnpx\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.276597 master-1 kubenswrapper[4740]: I1014 13:36:36.276579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data-custom\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 
13:36:36.276700 master-1 kubenswrapper[4740]: I1014 13:36:36.276687 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-scripts\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.276848 master-1 kubenswrapper[4740]: I1014 13:36:36.276833 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.276944 master-1 kubenswrapper[4740]: I1014 13:36:36.276927 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e84a47ae-f765-4b20-b59c-958d505a497d-etc-machine-id\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.277107 master-1 kubenswrapper[4740]: I1014 13:36:36.277088 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-combined-ca-bundle\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.277251 master-1 kubenswrapper[4740]: I1014 13:36:36.277194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e84a47ae-f765-4b20-b59c-958d505a497d-logs\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.277924 master-1 kubenswrapper[4740]: I1014 13:36:36.277904 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e84a47ae-f765-4b20-b59c-958d505a497d-logs\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.281099 master-1 kubenswrapper[4740]: I1014 13:36:36.280990 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-scripts\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.281872 master-1 kubenswrapper[4740]: I1014 13:36:36.281626 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e84a47ae-f765-4b20-b59c-958d505a497d-etc-machine-id\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.284460 master-1 kubenswrapper[4740]: I1014 13:36:36.284096 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.287631 master-1 kubenswrapper[4740]: I1014 13:36:36.287555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data-custom\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.295603 master-1 kubenswrapper[4740]: I1014 13:36:36.295552 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-combined-ca-bundle\") pod \"cinder-46645-api-2\" (UID: 
\"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.323631 master-1 kubenswrapper[4740]: I1014 13:36:36.322185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfnpx\" (UniqueName: \"kubernetes.io/projected/e84a47ae-f765-4b20-b59c-958d505a497d-kube-api-access-nfnpx\") pod \"cinder-46645-api-2\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.445897 master-1 kubenswrapper[4740]: I1014 13:36:36.445839 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-api-2" Oct 14 13:36:36.715273 master-1 kubenswrapper[4740]: I1014 13:36:36.715039 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" Oct 14 13:36:36.902597 master-1 kubenswrapper[4740]: I1014 13:36:36.902544 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-swift-storage-0\") pod \"4698540e-d270-4f76-8a8e-f7c3eea7b601\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " Oct 14 13:36:36.902597 master-1 kubenswrapper[4740]: I1014 13:36:36.902591 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-sb\") pod \"4698540e-d270-4f76-8a8e-f7c3eea7b601\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " Oct 14 13:36:36.902597 master-1 kubenswrapper[4740]: I1014 13:36:36.902697 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-config\") pod \"4698540e-d270-4f76-8a8e-f7c3eea7b601\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " Oct 14 13:36:36.903986 master-1 kubenswrapper[4740]: 
I1014 13:36:36.903710 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-nb\") pod \"4698540e-d270-4f76-8a8e-f7c3eea7b601\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " Oct 14 13:36:36.904050 master-1 kubenswrapper[4740]: I1014 13:36:36.903993 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwvph\" (UniqueName: \"kubernetes.io/projected/4698540e-d270-4f76-8a8e-f7c3eea7b601-kube-api-access-kwvph\") pod \"4698540e-d270-4f76-8a8e-f7c3eea7b601\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " Oct 14 13:36:36.904050 master-1 kubenswrapper[4740]: I1014 13:36:36.904031 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-svc\") pod \"4698540e-d270-4f76-8a8e-f7c3eea7b601\" (UID: \"4698540e-d270-4f76-8a8e-f7c3eea7b601\") " Oct 14 13:36:36.922616 master-1 kubenswrapper[4740]: I1014 13:36:36.922382 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"e230307d-3fb2-44c5-8259-563e509c9f68","Type":"ContainerStarted","Data":"379b6c835b4e8f13348bf16b176f146a071805fa9ab4a6f04530b02ffd6f3ad5"} Oct 14 13:36:36.922616 master-1 kubenswrapper[4740]: I1014 13:36:36.922533 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4698540e-d270-4f76-8a8e-f7c3eea7b601-kube-api-access-kwvph" (OuterVolumeSpecName: "kube-api-access-kwvph") pod "4698540e-d270-4f76-8a8e-f7c3eea7b601" (UID: "4698540e-d270-4f76-8a8e-f7c3eea7b601"). InnerVolumeSpecName "kube-api-access-kwvph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:36.929912 master-1 kubenswrapper[4740]: I1014 13:36:36.929827 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" event={"ID":"4698540e-d270-4f76-8a8e-f7c3eea7b601","Type":"ContainerDied","Data":"1c9d6187e77b6092cc8d666d7bbfb97e10b4c0710f17c52dcadebcefb94b46b9"} Oct 14 13:36:36.930004 master-1 kubenswrapper[4740]: I1014 13:36:36.929961 4740 scope.go:117] "RemoveContainer" containerID="6463cfb64f4b942a367a49a6e1764afdbfbfbb9ce54aa48ab514f1215ac22000" Oct 14 13:36:36.930246 master-1 kubenswrapper[4740]: I1014 13:36:36.930191 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c48f9bcf-6pg9l" Oct 14 13:36:36.941893 master-1 kubenswrapper[4740]: I1014 13:36:36.940379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"f653157f-4652-49a8-a3f6-0d952ce477f5","Type":"ContainerStarted","Data":"d57f310822dfad242d405f0401829360b9b65b5d4f224ec0f161ba22a84367dc"} Oct 14 13:36:36.949934 master-1 kubenswrapper[4740]: W1014 13:36:36.949873 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef882b5b_c0e5_47ca_ae59_ff311a14cdb5.slice/crio-040df63f4c18831478f95488323bfca405ec34e27bb57838b1e99fb19a9106bd WatchSource:0}: Error finding container 040df63f4c18831478f95488323bfca405ec34e27bb57838b1e99fb19a9106bd: Status 404 returned error can't find the container with id 040df63f4c18831478f95488323bfca405ec34e27bb57838b1e99fb19a9106bd Oct 14 13:36:36.971550 master-1 kubenswrapper[4740]: I1014 13:36:36.971350 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-backup-0"] Oct 14 13:36:36.971550 master-1 kubenswrapper[4740]: I1014 13:36:36.971392 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-798b8945b9-285k5"] Oct 14 
13:36:36.971939 master-1 kubenswrapper[4740]: I1014 13:36:36.971874 4740 scope.go:117] "RemoveContainer" containerID="a684d1da92f0debc0941a2955ca26c91d985f0d30578376353a742194e8eacc6" Oct 14 13:36:36.974360 master-1 kubenswrapper[4740]: E1014 13:36:36.972284 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerName="init" Oct 14 13:36:36.974360 master-1 kubenswrapper[4740]: I1014 13:36:36.972301 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerName="init" Oct 14 13:36:36.974360 master-1 kubenswrapper[4740]: E1014 13:36:36.972317 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerName="dnsmasq-dns" Oct 14 13:36:36.974360 master-1 kubenswrapper[4740]: I1014 13:36:36.972324 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerName="dnsmasq-dns" Oct 14 13:36:36.974360 master-1 kubenswrapper[4740]: I1014 13:36:36.972499 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4698540e-d270-4f76-8a8e-f7c3eea7b601" containerName="dnsmasq-dns" Oct 14 13:36:36.974360 master-1 kubenswrapper[4740]: I1014 13:36:36.973397 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:36.984438 master-1 kubenswrapper[4740]: I1014 13:36:36.981576 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4698540e-d270-4f76-8a8e-f7c3eea7b601" (UID: "4698540e-d270-4f76-8a8e-f7c3eea7b601"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:36.984438 master-1 kubenswrapper[4740]: I1014 13:36:36.982145 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-config" (OuterVolumeSpecName: "config") pod "4698540e-d270-4f76-8a8e-f7c3eea7b601" (UID: "4698540e-d270-4f76-8a8e-f7c3eea7b601"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:36.984993 master-1 kubenswrapper[4740]: I1014 13:36:36.984802 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-675bcd49b4-pn7dg"] Oct 14 13:36:36.987046 master-1 kubenswrapper[4740]: I1014 13:36:36.987016 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:36.988351 master-1 kubenswrapper[4740]: I1014 13:36:36.987836 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4698540e-d270-4f76-8a8e-f7c3eea7b601" (UID: "4698540e-d270-4f76-8a8e-f7c3eea7b601"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:36.992872 master-1 kubenswrapper[4740]: I1014 13:36:36.992801 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-api-2"] Oct 14 13:36:36.994375 master-1 kubenswrapper[4740]: I1014 13:36:36.994346 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Oct 14 13:36:36.994614 master-1 kubenswrapper[4740]: I1014 13:36:36.994595 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Oct 14 13:36:36.994742 master-1 kubenswrapper[4740]: I1014 13:36:36.994721 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Oct 14 13:36:36.994902 master-1 kubenswrapper[4740]: I1014 13:36:36.994869 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Oct 14 13:36:37.007723 master-1 kubenswrapper[4740]: I1014 13:36:37.007686 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:37.007723 master-1 kubenswrapper[4740]: I1014 13:36:37.007731 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwvph\" (UniqueName: \"kubernetes.io/projected/4698540e-d270-4f76-8a8e-f7c3eea7b601-kube-api-access-kwvph\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:37.007723 master-1 kubenswrapper[4740]: I1014 13:36:37.007746 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:37.008115 master-1 kubenswrapper[4740]: I1014 13:36:37.007757 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:37.015004 master-1 kubenswrapper[4740]: I1014 13:36:37.014644 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4698540e-d270-4f76-8a8e-f7c3eea7b601" (UID: "4698540e-d270-4f76-8a8e-f7c3eea7b601"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:37.017537 master-1 kubenswrapper[4740]: I1014 13:36:37.017502 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4698540e-d270-4f76-8a8e-f7c3eea7b601" (UID: "4698540e-d270-4f76-8a8e-f7c3eea7b601"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:37.110341 master-1 kubenswrapper[4740]: I1014 13:36:37.110285 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-merged\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.110479 master-1 kubenswrapper[4740]: I1014 13:36:37.110344 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-custom\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.110479 master-1 kubenswrapper[4740]: I1014 13:36:37.110380 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-swift-storage-0\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.110479 master-1 kubenswrapper[4740]: I1014 13:36:37.110401 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-combined-ca-bundle\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.110479 master-1 kubenswrapper[4740]: I1014 13:36:37.110422 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-sb\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.110479 master-1 kubenswrapper[4740]: I1014 13:36:37.110469 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.110651 master-1 kubenswrapper[4740]: I1014 13:36:37.110494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcddq\" (UniqueName: \"kubernetes.io/projected/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-kube-api-access-xcddq\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 
13:36:37.110651 master-1 kubenswrapper[4740]: I1014 13:36:37.110515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-etc-podinfo\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.110651 master-1 kubenswrapper[4740]: I1014 13:36:37.110538 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-logs\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.110651 master-1 kubenswrapper[4740]: I1014 13:36:37.110562 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-scripts\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.110651 master-1 kubenswrapper[4740]: I1014 13:36:37.110581 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-nb\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.110651 master-1 kubenswrapper[4740]: I1014 13:36:37.110598 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-config\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " 
pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.110651 master-1 kubenswrapper[4740]: I1014 13:36:37.110624 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z77p2\" (UniqueName: \"kubernetes.io/projected/83a644d5-c439-4938-8afb-e25b58786ea3-kube-api-access-z77p2\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.110850 master-1 kubenswrapper[4740]: I1014 13:36:37.110656 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-svc\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.110850 master-1 kubenswrapper[4740]: I1014 13:36:37.110706 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:37.110850 master-1 kubenswrapper[4740]: I1014 13:36:37.110718 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4698540e-d270-4f76-8a8e-f7c3eea7b601-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:37.185940 master-1 kubenswrapper[4740]: I1014 13:36:37.185888 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798b8945b9-285k5"] Oct 14 13:36:37.195247 master-1 kubenswrapper[4740]: I1014 13:36:37.195174 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-675bcd49b4-pn7dg"] Oct 14 13:36:37.211941 master-1 kubenswrapper[4740]: I1014 13:36:37.211887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-sb\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.211970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.211996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcddq\" (UniqueName: \"kubernetes.io/projected/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-kube-api-access-xcddq\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212020 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-etc-podinfo\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-logs\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212068 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-scripts\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-nb\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-config\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212130 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z77p2\" (UniqueName: \"kubernetes.io/projected/83a644d5-c439-4938-8afb-e25b58786ea3-kube-api-access-z77p2\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212164 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-svc\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.212196 master-1 kubenswrapper[4740]: I1014 13:36:37.212192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-merged\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.212645 master-1 kubenswrapper[4740]: I1014 13:36:37.212210 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-custom\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.212645 master-1 kubenswrapper[4740]: I1014 13:36:37.212252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-swift-storage-0\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.212645 master-1 kubenswrapper[4740]: I1014 13:36:37.212269 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-combined-ca-bundle\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.213546 master-1 kubenswrapper[4740]: I1014 13:36:37.213498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-sb\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.215041 master-1 kubenswrapper[4740]: I1014 13:36:37.214877 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-merged\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.215041 master-1 kubenswrapper[4740]: I1014 13:36:37.214971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-logs\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.215661 master-1 kubenswrapper[4740]: I1014 13:36:37.215602 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-combined-ca-bundle\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.215661 master-1 kubenswrapper[4740]: I1014 13:36:37.215615 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-nb\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.216886 master-1 kubenswrapper[4740]: I1014 13:36:37.216822 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-swift-storage-0\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.217532 master-1 kubenswrapper[4740]: I1014 13:36:37.217491 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-svc\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.218644 master-1 kubenswrapper[4740]: I1014 13:36:37.218555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-config\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:37.218644 master-1 kubenswrapper[4740]: I1014 13:36:37.218564 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-etc-podinfo\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.218644 master-1 kubenswrapper[4740]: I1014 13:36:37.218616 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.218797 master-1 kubenswrapper[4740]: I1014 13:36:37.218674 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-scripts\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:37.219946 master-1 kubenswrapper[4740]: I1014 13:36:37.219876 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-custom\") pod 
\"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg"
Oct 14 13:36:37.260349 master-1 kubenswrapper[4740]: I1014 13:36:37.257752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcddq\" (UniqueName: \"kubernetes.io/projected/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-kube-api-access-xcddq\") pod \"ironic-675bcd49b4-pn7dg\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " pod="openstack/ironic-675bcd49b4-pn7dg"
Oct 14 13:36:37.313632 master-1 kubenswrapper[4740]: I1014 13:36:37.313523 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-675bcd49b4-pn7dg"
Oct 14 13:36:37.351676 master-1 kubenswrapper[4740]: I1014 13:36:37.351588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z77p2\" (UniqueName: \"kubernetes.io/projected/83a644d5-c439-4938-8afb-e25b58786ea3-kube-api-access-z77p2\") pod \"dnsmasq-dns-798b8945b9-285k5\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " pod="openstack/dnsmasq-dns-798b8945b9-285k5"
Oct 14 13:36:37.405164 master-1 kubenswrapper[4740]: I1014 13:36:37.405034 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64c48f9bcf-6pg9l"]
Oct 14 13:36:37.419306 master-1 kubenswrapper[4740]: I1014 13:36:37.419182 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64c48f9bcf-6pg9l"]
Oct 14 13:36:37.618344 master-1 kubenswrapper[4740]: I1014 13:36:37.609002 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798b8945b9-285k5"
Oct 14 13:36:37.813178 master-1 kubenswrapper[4740]: I1014 13:36:37.813079 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-675bcd49b4-pn7dg"]
Oct 14 13:36:37.956129 master-1 kubenswrapper[4740]: I1014 13:36:37.956022 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"e84a47ae-f765-4b20-b59c-958d505a497d","Type":"ContainerStarted","Data":"5491b356e9b8a3b31db83e3db99fdf812310e54e8e2bc183c38c97162ac53039"}
Oct 14 13:36:37.962718 master-1 kubenswrapper[4740]: I1014 13:36:37.962631 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"e230307d-3fb2-44c5-8259-563e509c9f68","Type":"ContainerStarted","Data":"fd21787fe173e7d31edd4b3c041226299c7a231183c4f3be230b32205cea12e3"}
Oct 14 13:36:37.968094 master-1 kubenswrapper[4740]: I1014 13:36:37.968047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-675bcd49b4-pn7dg" event={"ID":"2ea6549c-7eb4-4d05-9cd2-b9e448c39186","Type":"ContainerStarted","Data":"32e2694c27e01f7d295452495678c8612a59e92c9e4958db99ac11687735b6a0"}
Oct 14 13:36:37.981204 master-1 kubenswrapper[4740]: I1014 13:36:37.981125 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5","Type":"ContainerStarted","Data":"040df63f4c18831478f95488323bfca405ec34e27bb57838b1e99fb19a9106bd"}
Oct 14 13:36:38.052158 master-1 kubenswrapper[4740]: I1014 13:36:38.051818 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-46645-default-external-api-0" podStartSLOduration=11.05179941 podStartE2EDuration="11.05179941s" podCreationTimestamp="2025-10-14 13:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:38.022648202 +0000 UTC m=+1823.832937541" watchObservedRunningTime="2025-10-14 13:36:38.05179941 +0000 UTC m=+1823.862088739"
Oct 14 13:36:38.180405 master-1 kubenswrapper[4740]: W1014 13:36:38.180307 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83a644d5_c439_4938_8afb_e25b58786ea3.slice/crio-8c472fa5bb1861843e0a163d32ca27d3f223f6b0dcb822263486945743d501ba WatchSource:0}: Error finding container 8c472fa5bb1861843e0a163d32ca27d3f223f6b0dcb822263486945743d501ba: Status 404 returned error can't find the container with id 8c472fa5bb1861843e0a163d32ca27d3f223f6b0dcb822263486945743d501ba
Oct 14 13:36:38.203345 master-1 kubenswrapper[4740]: I1014 13:36:38.203189 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798b8945b9-285k5"]
Oct 14 13:36:38.945943 master-1 kubenswrapper[4740]: I1014 13:36:38.945888 4740 scope.go:117] "RemoveContainer" containerID="1c7b94efa39d7670d32309a936c6fab8a72315bb6ae55fba2aca900975b1c833"
Oct 14 13:36:38.969571 master-1 kubenswrapper[4740]: I1014 13:36:38.969514 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4698540e-d270-4f76-8a8e-f7c3eea7b601" path="/var/lib/kubelet/pods/4698540e-d270-4f76-8a8e-f7c3eea7b601/volumes"
Oct 14 13:36:39.000036 master-1 kubenswrapper[4740]: I1014 13:36:38.998492 4740 generic.go:334] "Generic (PLEG): container finished" podID="83a644d5-c439-4938-8afb-e25b58786ea3" containerID="4f18119303bbd765ff611c77fcf9646c1aea81b4054c9b43a4c67a0362a165f6" exitCode=0
Oct 14 13:36:39.000036 master-1 kubenswrapper[4740]: I1014 13:36:38.998567 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798b8945b9-285k5" event={"ID":"83a644d5-c439-4938-8afb-e25b58786ea3","Type":"ContainerDied","Data":"4f18119303bbd765ff611c77fcf9646c1aea81b4054c9b43a4c67a0362a165f6"}
Oct 14 13:36:39.000036 master-1 kubenswrapper[4740]: I1014 13:36:38.998593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798b8945b9-285k5" event={"ID":"83a644d5-c439-4938-8afb-e25b58786ea3","Type":"ContainerStarted","Data":"8c472fa5bb1861843e0a163d32ca27d3f223f6b0dcb822263486945743d501ba"}
Oct 14 13:36:39.005863 master-1 kubenswrapper[4740]: I1014 13:36:39.005810 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5","Type":"ContainerStarted","Data":"9ba4a1cb07ba653ec933cd08278473e338b17d6c775d6a8945fd1b50ac774fdf"}
Oct 14 13:36:39.005936 master-1 kubenswrapper[4740]: I1014 13:36:39.005867 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5","Type":"ContainerStarted","Data":"eec9e90a49fe748a63927a0b538d1fbd123ea7fe6b9d177dcf595bef2c2b920b"}
Oct 14 13:36:39.009762 master-1 kubenswrapper[4740]: I1014 13:36:39.008353 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"e84a47ae-f765-4b20-b59c-958d505a497d","Type":"ContainerStarted","Data":"072b8c2e0bc84670b92afd25d806a41a9ec21a742f1dd50663d060c3d4dedc89"}
Oct 14 13:36:39.011782 master-1 kubenswrapper[4740]: I1014 13:36:39.011749 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"f653157f-4652-49a8-a3f6-0d952ce477f5","Type":"ContainerStarted","Data":"9bd0796ae392f14401e1969973c38e4eb770d8ae1123996b3461d50275e0f124"}
Oct 14 13:36:39.083335 master-1 kubenswrapper[4740]: I1014 13:36:39.080657 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-46645-backup-0" podStartSLOduration=2.811148464 podStartE2EDuration="4.080618909s" podCreationTimestamp="2025-10-14 13:36:35 +0000 UTC" firstStartedPulling="2025-10-14 13:36:36.9720713 +0000 UTC m=+1822.782360629" lastFinishedPulling="2025-10-14 13:36:38.241541745 +0000 UTC m=+1824.051831074" observedRunningTime="2025-10-14 13:36:39.076597833 +0000 UTC m=+1824.886887152" watchObservedRunningTime="2025-10-14 13:36:39.080618909 +0000 UTC m=+1824.890908238"
Oct 14 13:36:39.939017 master-1 kubenswrapper[4740]: I1014 13:36:39.938932 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-46645-default-internal-api-1"]
Oct 14 13:36:39.940849 master-1 kubenswrapper[4740]: I1014 13:36:39.940747 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:39.995432 master-1 kubenswrapper[4740]: I1014 13:36:39.993445 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-46645-default-internal-config-data"
Oct 14 13:36:40.007271 master-1 kubenswrapper[4740]: I1014 13:36:40.007211 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-internal-api-1"]
Oct 14 13:36:40.031627 master-1 kubenswrapper[4740]: I1014 13:36:40.031550 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.031627 master-1 kubenswrapper[4740]: I1014 13:36:40.031620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-combined-ca-bundle\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.031979 master-1 kubenswrapper[4740]: I1014 13:36:40.031754 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-scripts\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.031979 master-1 kubenswrapper[4740]: I1014 13:36:40.031793 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpcp\" (UniqueName: \"kubernetes.io/projected/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-kube-api-access-mbpcp\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.031979 master-1 kubenswrapper[4740]: I1014 13:36:40.031853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-httpd-run\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.031979 master-1 kubenswrapper[4740]: I1014 13:36:40.031888 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-logs\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.031979 master-1 kubenswrapper[4740]: I1014 13:36:40.031941 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-config-data\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.051557 master-1 kubenswrapper[4740]: I1014 13:36:40.051489 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerStarted","Data":"28a0443fce7c8344840417a03e93a9362711545a57eafed187ec416fc5ed0bdc"}
Oct 14 13:36:40.055607 master-1 kubenswrapper[4740]: I1014 13:36:40.055557 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"e84a47ae-f765-4b20-b59c-958d505a497d","Type":"ContainerStarted","Data":"3caa94129e659ea2a21096e03012305e13d4e3217da4b3813c8655e7dfa60d17"}
Oct 14 13:36:40.056646 master-1 kubenswrapper[4740]: I1014 13:36:40.056570 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-46645-api-2"
Oct 14 13:36:40.059598 master-1 kubenswrapper[4740]: I1014 13:36:40.059544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"f653157f-4652-49a8-a3f6-0d952ce477f5","Type":"ContainerStarted","Data":"35781c4c67292140f74c57106ce369c8ec38106a791f24dc371e1df398352859"}
Oct 14 13:36:40.061673 master-1 kubenswrapper[4740]: I1014 13:36:40.061536 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798b8945b9-285k5" event={"ID":"83a644d5-c439-4938-8afb-e25b58786ea3","Type":"ContainerStarted","Data":"42167b7e73b6c2ca8670a08edb5911efbde5d940a12054566a79b028a350a11c"}
Oct 14 13:36:40.119607 master-1 kubenswrapper[4740]: I1014 13:36:40.119536 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-46645-scheduler-0" podStartSLOduration=4.069519307 podStartE2EDuration="5.119517673s" podCreationTimestamp="2025-10-14 13:36:35 +0000 UTC" firstStartedPulling="2025-10-14 13:36:36.098870099 +0000 UTC m=+1821.909159428" lastFinishedPulling="2025-10-14 13:36:37.148868465 +0000 UTC m=+1822.959157794" observedRunningTime="2025-10-14 13:36:40.107480106 +0000 UTC m=+1825.917769445" watchObservedRunningTime="2025-10-14 13:36:40.119517673 +0000 UTC m=+1825.929807002"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.133407 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-scripts\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.133463 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpcp\" (UniqueName: \"kubernetes.io/projected/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-kube-api-access-mbpcp\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.133516 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-httpd-run\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.133545 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-logs\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.133594 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-config-data\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.133623 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.133646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-combined-ca-bundle\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.134015 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-httpd-run\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.134701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-logs\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.135745 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.135777 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/92058cb70342f8fc4137d0387239503103ba12b3a1ee5489530157139323bc4f/globalmount\"" pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.162267 master-1 kubenswrapper[4740]: I1014 13:36:40.155858 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-config-data\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.178314 master-1 kubenswrapper[4740]: I1014 13:36:40.164539 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-combined-ca-bundle\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.221945 master-1 kubenswrapper[4740]: I1014 13:36:40.221735 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-scripts\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.226252 master-1 kubenswrapper[4740]: I1014 13:36:40.224651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpcp\" (UniqueName: \"kubernetes.io/projected/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-kube-api-access-mbpcp\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:40.246271 master-1 kubenswrapper[4740]: I1014 13:36:40.246174 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-798b8945b9-285k5" podStartSLOduration=4.246158087 podStartE2EDuration="4.246158087s" podCreationTimestamp="2025-10-14 13:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:40.225171825 +0000 UTC m=+1826.035461154" watchObservedRunningTime="2025-10-14 13:36:40.246158087 +0000 UTC m=+1826.056447416"
Oct 14 13:36:40.246745 master-1 kubenswrapper[4740]: I1014 13:36:40.246679 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-46645-api-2" podStartSLOduration=4.24667502 podStartE2EDuration="4.24667502s" podCreationTimestamp="2025-10-14 13:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:40.176982506 +0000 UTC m=+1825.987271835" watchObservedRunningTime="2025-10-14 13:36:40.24667502 +0000 UTC m=+1826.056964339"
Oct 14 13:36:40.487859 master-1 kubenswrapper[4740]: I1014 13:36:40.487775 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:40.790960 master-1 kubenswrapper[4740]: I1014 13:36:40.790895 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:41.078366 master-1 kubenswrapper[4740]: I1014 13:36:41.078216 4740 generic.go:334] "Generic (PLEG): container finished" podID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerID="28a0443fce7c8344840417a03e93a9362711545a57eafed187ec416fc5ed0bdc" exitCode=1
Oct 14 13:36:41.078859 master-1 kubenswrapper[4740]: I1014 13:36:41.078255 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"28a0443fce7c8344840417a03e93a9362711545a57eafed187ec416fc5ed0bdc"}
Oct 14 13:36:41.078859 master-1 kubenswrapper[4740]: I1014 13:36:41.078456 4740 scope.go:117] "RemoveContainer" containerID="1c7b94efa39d7670d32309a936c6fab8a72315bb6ae55fba2aca900975b1c833"
Oct 14 13:36:41.079420 master-1 kubenswrapper[4740]: I1014 13:36:41.079384 4740 scope.go:117] "RemoveContainer" containerID="28a0443fce7c8344840417a03e93a9362711545a57eafed187ec416fc5ed0bdc"
Oct 14 13:36:41.079813 master-1 kubenswrapper[4740]: E1014 13:36:41.079770 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:36:41.080886 master-1 kubenswrapper[4740]: I1014 13:36:41.080812 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-798b8945b9-285k5"
Oct 14 13:36:41.676922 master-1 kubenswrapper[4740]: I1014 13:36:41.676847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:41.840932 master-1 kubenswrapper[4740]: I1014 13:36:41.840787 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-46645-default-internal-api-1"
Oct 14 13:36:42.138263 master-1 kubenswrapper[4740]: I1014 13:36:42.138118 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46645-api-2"]
Oct 14 13:36:43.102312 master-1 kubenswrapper[4740]: I1014 13:36:43.102166 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-46645-api-2" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-46645-api-log" containerID="cri-o://072b8c2e0bc84670b92afd25d806a41a9ec21a742f1dd50663d060c3d4dedc89" gracePeriod=30
Oct 14 13:36:43.102596 master-1 kubenswrapper[4740]: I1014 13:36:43.102436 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-46645-api-2" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-api" containerID="cri-o://3caa94129e659ea2a21096e03012305e13d4e3217da4b3813c8655e7dfa60d17" gracePeriod=30
Oct 14 13:36:44.113631 master-1 kubenswrapper[4740]: I1014 13:36:44.113583 4740 generic.go:334] "Generic (PLEG): container finished" podID="e84a47ae-f765-4b20-b59c-958d505a497d" containerID="3caa94129e659ea2a21096e03012305e13d4e3217da4b3813c8655e7dfa60d17" exitCode=0
Oct 14 13:36:44.113631 master-1 kubenswrapper[4740]: I1014 13:36:44.113618 4740 generic.go:334] "Generic (PLEG): container finished" podID="e84a47ae-f765-4b20-b59c-958d505a497d" containerID="072b8c2e0bc84670b92afd25d806a41a9ec21a742f1dd50663d060c3d4dedc89" exitCode=143
Oct 14 13:36:44.114101 master-1 kubenswrapper[4740]: I1014 13:36:44.113638 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"e84a47ae-f765-4b20-b59c-958d505a497d","Type":"ContainerDied","Data":"3caa94129e659ea2a21096e03012305e13d4e3217da4b3813c8655e7dfa60d17"}
Oct 14 13:36:44.114101 master-1 kubenswrapper[4740]: I1014 13:36:44.113665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"e84a47ae-f765-4b20-b59c-958d505a497d","Type":"ContainerDied","Data":"072b8c2e0bc84670b92afd25d806a41a9ec21a742f1dd50663d060c3d4dedc89"}
Oct 14 13:36:44.688207 master-1 kubenswrapper[4740]: I1014 13:36:44.688092 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-external-api-0"
Oct 14 13:36:44.688207 master-1 kubenswrapper[4740]: I1014 13:36:44.688173 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-external-api-0"
Oct 14 13:36:44.729372 master-1 kubenswrapper[4740]: I1014 13:36:44.727294 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-46645-default-external-api-0"
Oct 14 13:36:44.797254 master-1 kubenswrapper[4740]: I1014 13:36:44.795535 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-46645-default-external-api-0"
Oct 14 13:36:44.917455 master-1 kubenswrapper[4740]: I1014 13:36:44.917367 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-internal-api-1"]
Oct 14 13:36:44.920922 master-1 kubenswrapper[4740]: I1014 13:36:44.920888 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.078113 master-1 kubenswrapper[4740]: I1014 13:36:45.078051 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfnpx\" (UniqueName: \"kubernetes.io/projected/e84a47ae-f765-4b20-b59c-958d505a497d-kube-api-access-nfnpx\") pod \"e84a47ae-f765-4b20-b59c-958d505a497d\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") "
Oct 14 13:36:45.078113 master-1 kubenswrapper[4740]: I1014 13:36:45.078119 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data\") pod \"e84a47ae-f765-4b20-b59c-958d505a497d\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") "
Oct 14 13:36:45.078453 master-1 kubenswrapper[4740]: I1014 13:36:45.078174 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e84a47ae-f765-4b20-b59c-958d505a497d-etc-machine-id\") pod \"e84a47ae-f765-4b20-b59c-958d505a497d\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") "
Oct 14 13:36:45.078453 master-1 kubenswrapper[4740]: I1014 13:36:45.078214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data-custom\") pod \"e84a47ae-f765-4b20-b59c-958d505a497d\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") "
Oct 14 13:36:45.078453 master-1 kubenswrapper[4740]: I1014 13:36:45.078261 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-scripts\") pod \"e84a47ae-f765-4b20-b59c-958d505a497d\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") "
Oct 14 13:36:45.078453 master-1 kubenswrapper[4740]: I1014 13:36:45.078350 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e84a47ae-f765-4b20-b59c-958d505a497d-logs\") pod \"e84a47ae-f765-4b20-b59c-958d505a497d\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") "
Oct 14 13:36:45.078453 master-1 kubenswrapper[4740]: I1014 13:36:45.078416 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-combined-ca-bundle\") pod \"e84a47ae-f765-4b20-b59c-958d505a497d\" (UID: \"e84a47ae-f765-4b20-b59c-958d505a497d\") "
Oct 14 13:36:45.078833 master-1 kubenswrapper[4740]: I1014 13:36:45.078787 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e84a47ae-f765-4b20-b59c-958d505a497d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e84a47ae-f765-4b20-b59c-958d505a497d" (UID: "e84a47ae-f765-4b20-b59c-958d505a497d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 13:36:45.079463 master-1 kubenswrapper[4740]: I1014 13:36:45.079430 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84a47ae-f765-4b20-b59c-958d505a497d-logs" (OuterVolumeSpecName: "logs") pod "e84a47ae-f765-4b20-b59c-958d505a497d" (UID: "e84a47ae-f765-4b20-b59c-958d505a497d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:36:45.082708 master-1 kubenswrapper[4740]: I1014 13:36:45.082671 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-scripts" (OuterVolumeSpecName: "scripts") pod "e84a47ae-f765-4b20-b59c-958d505a497d" (UID: "e84a47ae-f765-4b20-b59c-958d505a497d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:45.082785 master-1 kubenswrapper[4740]: I1014 13:36:45.082718 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84a47ae-f765-4b20-b59c-958d505a497d-kube-api-access-nfnpx" (OuterVolumeSpecName: "kube-api-access-nfnpx") pod "e84a47ae-f765-4b20-b59c-958d505a497d" (UID: "e84a47ae-f765-4b20-b59c-958d505a497d"). InnerVolumeSpecName "kube-api-access-nfnpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:36:45.083833 master-1 kubenswrapper[4740]: I1014 13:36:45.083780 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e84a47ae-f765-4b20-b59c-958d505a497d" (UID: "e84a47ae-f765-4b20-b59c-958d505a497d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:45.103818 master-1 kubenswrapper[4740]: I1014 13:36:45.103733 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e84a47ae-f765-4b20-b59c-958d505a497d" (UID: "e84a47ae-f765-4b20-b59c-958d505a497d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:45.134258 master-1 kubenswrapper[4740]: I1014 13:36:45.133905 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"fbd3b301-ecc2-4099-846b-d9b6e7b6320d","Type":"ContainerStarted","Data":"9e476382f771003447f81b5bed083830c3ccd39f68b10a151feb8a5a647e6b5c"}
Oct 14 13:36:45.141408 master-1 kubenswrapper[4740]: I1014 13:36:45.136434 4740 generic.go:334] "Generic (PLEG): container finished" podID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerID="8b058597047e7fe49ad18607b90259fa3da40b0c4c624680bec3ba26687bbf31" exitCode=0
Oct 14 13:36:45.141408 master-1 kubenswrapper[4740]: I1014 13:36:45.136494 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-675bcd49b4-pn7dg" event={"ID":"2ea6549c-7eb4-4d05-9cd2-b9e448c39186","Type":"ContainerDied","Data":"8b058597047e7fe49ad18607b90259fa3da40b0c4c624680bec3ba26687bbf31"}
Oct 14 13:36:45.141408 master-1 kubenswrapper[4740]: I1014 13:36:45.139787 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"e84a47ae-f765-4b20-b59c-958d505a497d","Type":"ContainerDied","Data":"5491b356e9b8a3b31db83e3db99fdf812310e54e8e2bc183c38c97162ac53039"}
Oct 14 13:36:45.141408 master-1 kubenswrapper[4740]: I1014 13:36:45.139847 4740 scope.go:117] "RemoveContainer" containerID="3caa94129e659ea2a21096e03012305e13d4e3217da4b3813c8655e7dfa60d17"
Oct 14 13:36:45.141408 master-1 kubenswrapper[4740]: I1014 13:36:45.139967 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.141408 master-1 kubenswrapper[4740]: I1014 13:36:45.140013 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-external-api-0"
Oct 14 13:36:45.141408 master-1 kubenswrapper[4740]: I1014 13:36:45.140066 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-external-api-0"
Oct 14 13:36:45.142310 master-1 kubenswrapper[4740]: I1014 13:36:45.141837 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data" (OuterVolumeSpecName: "config-data") pod "e84a47ae-f765-4b20-b59c-958d505a497d" (UID: "e84a47ae-f765-4b20-b59c-958d505a497d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.181010 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.181079 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e84a47ae-f765-4b20-b59c-958d505a497d-etc-machine-id\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.181090 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-config-data-custom\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.181099 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-scripts\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.181108 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e84a47ae-f765-4b20-b59c-958d505a497d-logs\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.181123 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e84a47ae-f765-4b20-b59c-958d505a497d-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.181131 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfnpx\" (UniqueName: \"kubernetes.io/projected/e84a47ae-f765-4b20-b59c-958d505a497d-kube-api-access-nfnpx\") on node \"master-1\" DevicePath \"\""
Oct 14 13:36:45.202723 master-1 kubenswrapper[4740]: I1014 13:36:45.192535 4740 scope.go:117] "RemoveContainer" containerID="072b8c2e0bc84670b92afd25d806a41a9ec21a742f1dd50663d060c3d4dedc89"
Oct 14 13:36:45.507880 master-1 kubenswrapper[4740]: I1014 13:36:45.507827 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46645-api-2"]
Oct 14 13:36:45.515848 master-1 kubenswrapper[4740]: I1014 13:36:45.515806 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-46645-api-2"]
Oct 14 13:36:45.566762 master-1 kubenswrapper[4740]: I1014 13:36:45.566629 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46645-api-2"]
Oct 14 13:36:45.567426 master-1 kubenswrapper[4740]: E1014 13:36:45.567053 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-api"
Oct 14 13:36:45.567426 master-1 kubenswrapper[4740]: I1014 13:36:45.567071 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-api"
Oct 14 13:36:45.567426 master-1 kubenswrapper[4740]: E1014 13:36:45.567120 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-46645-api-log"
Oct 14 13:36:45.567426 master-1 kubenswrapper[4740]: I1014 13:36:45.567129 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-46645-api-log"
Oct 14 13:36:45.567863 master-1 kubenswrapper[4740]: I1014 13:36:45.567507 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-api"
Oct 14 13:36:45.567863 master-1 kubenswrapper[4740]: I1014 13:36:45.567549 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" containerName="cinder-46645-api-log"
Oct 14 13:36:45.570095 master-1 kubenswrapper[4740]: I1014 13:36:45.570065 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.584408 master-1 kubenswrapper[4740]: I1014 13:36:45.574604 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Oct 14 13:36:45.584408 master-1 kubenswrapper[4740]: I1014 13:36:45.575036 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Oct 14 13:36:45.584408 master-1 kubenswrapper[4740]: I1014 13:36:45.575286 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-api-config-data"
Oct 14 13:36:45.601837 master-1 kubenswrapper[4740]: I1014 13:36:45.601719 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-api-2"]
Oct 14 13:36:45.692613 master-1 kubenswrapper[4740]: I1014 13:36:45.692532 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-combined-ca-bundle\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.692946 master-1 kubenswrapper[4740]: I1014 13:36:45.692855 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-config-data\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.692946 master-1 kubenswrapper[4740]: I1014 13:36:45.692890 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-config-data-custom\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.692946 master-1 kubenswrapper[4740]: I1014 13:36:45.692910 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-scripts\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.692946 master-1 kubenswrapper[4740]: I1014 13:36:45.692927 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-etc-machine-id\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2"
Oct 14 13:36:45.693615 master-1 kubenswrapper[4740]: I1014 13:36:45.693549 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName:
\"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-internal-tls-certs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.693726 master-1 kubenswrapper[4740]: I1014 13:36:45.693700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-public-tls-certs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.693817 master-1 kubenswrapper[4740]: I1014 13:36:45.693783 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-logs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.694150 master-1 kubenswrapper[4740]: I1014 13:36:45.694091 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ww6r\" (UniqueName: \"kubernetes.io/projected/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-kube-api-access-8ww6r\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.795593 master-1 kubenswrapper[4740]: I1014 13:36:45.795548 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-internal-tls-certs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.795878 master-1 kubenswrapper[4740]: I1014 13:36:45.795853 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-public-tls-certs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.796023 master-1 kubenswrapper[4740]: I1014 13:36:45.796004 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-logs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.796196 master-1 kubenswrapper[4740]: I1014 13:36:45.796174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ww6r\" (UniqueName: \"kubernetes.io/projected/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-kube-api-access-8ww6r\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.796335 master-1 kubenswrapper[4740]: I1014 13:36:45.796318 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-combined-ca-bundle\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.796467 master-1 kubenswrapper[4740]: I1014 13:36:45.796450 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-config-data\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.796576 master-1 kubenswrapper[4740]: I1014 13:36:45.796558 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-config-data-custom\") pod 
\"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.797281 master-1 kubenswrapper[4740]: I1014 13:36:45.797259 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-scripts\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.797470 master-1 kubenswrapper[4740]: I1014 13:36:45.797451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-etc-machine-id\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.797636 master-1 kubenswrapper[4740]: I1014 13:36:45.797585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-etc-machine-id\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.797734 master-1 kubenswrapper[4740]: I1014 13:36:45.796590 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-logs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.799641 master-1 kubenswrapper[4740]: I1014 13:36:45.799585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-internal-tls-certs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.801619 master-1 
kubenswrapper[4740]: I1014 13:36:45.801493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-config-data\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.801745 master-1 kubenswrapper[4740]: I1014 13:36:45.801703 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:45.803193 master-1 kubenswrapper[4740]: I1014 13:36:45.802389 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-config-data-custom\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.803193 master-1 kubenswrapper[4740]: I1014 13:36:45.802398 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-combined-ca-bundle\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.814078 master-1 kubenswrapper[4740]: I1014 13:36:45.814033 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-scripts\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.817923 master-1 kubenswrapper[4740]: I1014 13:36:45.817782 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-public-tls-certs\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " 
pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.821818 master-1 kubenswrapper[4740]: I1014 13:36:45.821770 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ww6r\" (UniqueName: \"kubernetes.io/projected/f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf-kube-api-access-8ww6r\") pod \"cinder-46645-api-2\" (UID: \"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf\") " pod="openstack/cinder-46645-api-2" Oct 14 13:36:45.865174 master-1 kubenswrapper[4740]: I1014 13:36:45.865084 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46645-scheduler-0"] Oct 14 13:36:45.985622 master-1 kubenswrapper[4740]: I1014 13:36:45.983878 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-api-2" Oct 14 13:36:46.054158 master-1 kubenswrapper[4740]: I1014 13:36:46.054001 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-46645-backup-0" Oct 14 13:36:46.121799 master-1 kubenswrapper[4740]: I1014 13:36:46.121707 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46645-backup-0"] Oct 14 13:36:46.156310 master-1 kubenswrapper[4740]: I1014 13:36:46.155370 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"fbd3b301-ecc2-4099-846b-d9b6e7b6320d","Type":"ContainerStarted","Data":"364a84ee9bc58c48f672391b8539f8003048bb79ffba9a01103ff45c1e9d5b2c"} Oct 14 13:36:46.163821 master-1 kubenswrapper[4740]: I1014 13:36:46.163663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-675bcd49b4-pn7dg" event={"ID":"2ea6549c-7eb4-4d05-9cd2-b9e448c39186","Type":"ContainerStarted","Data":"0ea6b7ebb9f3754225faa51f61c305e498d797ffce16bac0b4921cb8e587bfb3"} Oct 14 13:36:46.163821 master-1 kubenswrapper[4740]: I1014 13:36:46.163705 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-675bcd49b4-pn7dg" 
event={"ID":"2ea6549c-7eb4-4d05-9cd2-b9e448c39186","Type":"ContainerStarted","Data":"fe568b18722cd611790e4cd90f989f5c0c3fb201de0d351de12d0f4deddaac5c"} Oct 14 13:36:46.163821 master-1 kubenswrapper[4740]: I1014 13:36:46.163733 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:36:46.170045 master-1 kubenswrapper[4740]: I1014 13:36:46.169766 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-46645-backup-0" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="cinder-backup" containerID="cri-o://eec9e90a49fe748a63927a0b538d1fbd123ea7fe6b9d177dcf595bef2c2b920b" gracePeriod=30 Oct 14 13:36:46.170045 master-1 kubenswrapper[4740]: I1014 13:36:46.170012 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-46645-backup-0" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="probe" containerID="cri-o://9ba4a1cb07ba653ec933cd08278473e338b17d6c775d6a8945fd1b50ac774fdf" gracePeriod=30 Oct 14 13:36:46.170938 master-1 kubenswrapper[4740]: I1014 13:36:46.170555 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-46645-scheduler-0" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="cinder-scheduler" containerID="cri-o://9bd0796ae392f14401e1969973c38e4eb770d8ae1123996b3461d50275e0f124" gracePeriod=30 Oct 14 13:36:46.170938 master-1 kubenswrapper[4740]: I1014 13:36:46.170711 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-46645-scheduler-0" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="probe" containerID="cri-o://35781c4c67292140f74c57106ce369c8ec38106a791f24dc371e1df398352859" gracePeriod=30 Oct 14 13:36:46.208797 master-1 kubenswrapper[4740]: I1014 13:36:46.208470 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-675bcd49b4-pn7dg" 
podStartSLOduration=3.948636723 podStartE2EDuration="10.208195188s" podCreationTimestamp="2025-10-14 13:36:36 +0000 UTC" firstStartedPulling="2025-10-14 13:36:37.87787689 +0000 UTC m=+1823.688166219" lastFinishedPulling="2025-10-14 13:36:44.137435355 +0000 UTC m=+1829.947724684" observedRunningTime="2025-10-14 13:36:46.201300536 +0000 UTC m=+1832.011589865" watchObservedRunningTime="2025-10-14 13:36:46.208195188 +0000 UTC m=+1832.018484517" Oct 14 13:36:46.493615 master-1 kubenswrapper[4740]: I1014 13:36:46.485446 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-api-2"] Oct 14 13:36:46.955351 master-1 kubenswrapper[4740]: I1014 13:36:46.955215 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84a47ae-f765-4b20-b59c-958d505a497d" path="/var/lib/kubelet/pods/e84a47ae-f765-4b20-b59c-958d505a497d/volumes" Oct 14 13:36:47.217345 master-1 kubenswrapper[4740]: I1014 13:36:47.216191 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerID="9ba4a1cb07ba653ec933cd08278473e338b17d6c775d6a8945fd1b50ac774fdf" exitCode=0 Oct 14 13:36:47.217345 master-1 kubenswrapper[4740]: I1014 13:36:47.216317 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5","Type":"ContainerDied","Data":"9ba4a1cb07ba653ec933cd08278473e338b17d6c775d6a8945fd1b50ac774fdf"} Oct 14 13:36:47.242766 master-1 kubenswrapper[4740]: I1014 13:36:47.242716 4740 generic.go:334] "Generic (PLEG): container finished" podID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerID="35781c4c67292140f74c57106ce369c8ec38106a791f24dc371e1df398352859" exitCode=0 Oct 14 13:36:47.243721 master-1 kubenswrapper[4740]: I1014 13:36:47.243689 4740 generic.go:334] "Generic (PLEG): container finished" podID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerID="9bd0796ae392f14401e1969973c38e4eb770d8ae1123996b3461d50275e0f124" exitCode=0 Oct 
14 13:36:47.243997 master-1 kubenswrapper[4740]: I1014 13:36:47.243345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"f653157f-4652-49a8-a3f6-0d952ce477f5","Type":"ContainerDied","Data":"35781c4c67292140f74c57106ce369c8ec38106a791f24dc371e1df398352859"} Oct 14 13:36:47.244137 master-1 kubenswrapper[4740]: I1014 13:36:47.244107 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"f653157f-4652-49a8-a3f6-0d952ce477f5","Type":"ContainerDied","Data":"9bd0796ae392f14401e1969973c38e4eb770d8ae1123996b3461d50275e0f124"} Oct 14 13:36:47.251007 master-1 kubenswrapper[4740]: I1014 13:36:47.250944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf","Type":"ContainerStarted","Data":"ae3c7a57b3739e8018602afcf78ce00686d843a0fec152b742e57c24028c0bc4"} Oct 14 13:36:47.252731 master-1 kubenswrapper[4740]: I1014 13:36:47.252691 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerID="ef3a459254d17ecf084a465e7492356555ebda1d4fae1d4d0892bf2ec84bdf7a" exitCode=1 Oct 14 13:36:47.253500 master-1 kubenswrapper[4740]: I1014 13:36:47.253078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"fbd3b301-ecc2-4099-846b-d9b6e7b6320d","Type":"ContainerDied","Data":"ef3a459254d17ecf084a465e7492356555ebda1d4fae1d4d0892bf2ec84bdf7a"} Oct 14 13:36:47.253736 master-1 kubenswrapper[4740]: I1014 13:36:47.253699 4740 scope.go:117] "RemoveContainer" containerID="ef3a459254d17ecf084a465e7492356555ebda1d4fae1d4d0892bf2ec84bdf7a" Oct 14 13:36:47.498628 master-1 kubenswrapper[4740]: I1014 13:36:47.498582 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:47.498813 master-1 kubenswrapper[4740]: 
I1014 13:36:47.498670 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:36:47.611574 master-1 kubenswrapper[4740]: I1014 13:36:47.611518 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:36:47.652667 master-1 kubenswrapper[4740]: I1014 13:36:47.652607 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:36:47.840532 master-1 kubenswrapper[4740]: I1014 13:36:47.840473 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5755976884-m54wt" Oct 14 13:36:48.125255 master-1 kubenswrapper[4740]: I1014 13:36:48.123113 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:48.270925 master-1 kubenswrapper[4740]: I1014 13:36:48.270842 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb" exitCode=1 Oct 14 13:36:48.270925 master-1 kubenswrapper[4740]: I1014 13:36:48.270931 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"fbd3b301-ecc2-4099-846b-d9b6e7b6320d","Type":"ContainerDied","Data":"08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb"} Oct 14 13:36:48.271674 master-1 kubenswrapper[4740]: I1014 13:36:48.270981 4740 scope.go:117] "RemoveContainer" containerID="ef3a459254d17ecf084a465e7492356555ebda1d4fae1d4d0892bf2ec84bdf7a" Oct 14 13:36:48.272254 master-1 kubenswrapper[4740]: I1014 13:36:48.271975 4740 scope.go:117] "RemoveContainer" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb" Oct 14 13:36:48.272341 master-1 kubenswrapper[4740]: E1014 13:36:48.272309 4740 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:36:48.275216 master-1 kubenswrapper[4740]: I1014 13:36:48.275105 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerID="eec9e90a49fe748a63927a0b538d1fbd123ea7fe6b9d177dcf595bef2c2b920b" exitCode=0 Oct 14 13:36:48.275216 master-1 kubenswrapper[4740]: I1014 13:36:48.275159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5","Type":"ContainerDied","Data":"eec9e90a49fe748a63927a0b538d1fbd123ea7fe6b9d177dcf595bef2c2b920b"} Oct 14 13:36:48.276788 master-1 kubenswrapper[4740]: I1014 13:36:48.276741 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"f653157f-4652-49a8-a3f6-0d952ce477f5","Type":"ContainerDied","Data":"d57f310822dfad242d405f0401829360b9b65b5d4f224ec0f161ba22a84367dc"} Oct 14 13:36:48.276940 master-1 kubenswrapper[4740]: I1014 13:36:48.276828 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:48.278157 master-1 kubenswrapper[4740]: I1014 13:36:48.278124 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data-custom\") pod \"f653157f-4652-49a8-a3f6-0d952ce477f5\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " Oct 14 13:36:48.278455 master-1 kubenswrapper[4740]: I1014 13:36:48.278421 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqg7l\" (UniqueName: \"kubernetes.io/projected/f653157f-4652-49a8-a3f6-0d952ce477f5-kube-api-access-xqg7l\") pod \"f653157f-4652-49a8-a3f6-0d952ce477f5\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " Oct 14 13:36:48.278499 master-1 kubenswrapper[4740]: I1014 13:36:48.278463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-combined-ca-bundle\") pod \"f653157f-4652-49a8-a3f6-0d952ce477f5\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " Oct 14 13:36:48.278499 master-1 kubenswrapper[4740]: I1014 13:36:48.278494 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f653157f-4652-49a8-a3f6-0d952ce477f5-etc-machine-id\") pod \"f653157f-4652-49a8-a3f6-0d952ce477f5\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " Oct 14 13:36:48.278568 master-1 kubenswrapper[4740]: I1014 13:36:48.278522 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-scripts\") pod \"f653157f-4652-49a8-a3f6-0d952ce477f5\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " Oct 14 13:36:48.278710 master-1 kubenswrapper[4740]: I1014 13:36:48.278673 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data\") pod \"f653157f-4652-49a8-a3f6-0d952ce477f5\" (UID: \"f653157f-4652-49a8-a3f6-0d952ce477f5\") " Oct 14 13:36:48.279772 master-1 kubenswrapper[4740]: I1014 13:36:48.279715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f653157f-4652-49a8-a3f6-0d952ce477f5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f653157f-4652-49a8-a3f6-0d952ce477f5" (UID: "f653157f-4652-49a8-a3f6-0d952ce477f5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:48.283400 master-1 kubenswrapper[4740]: I1014 13:36:48.283345 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf","Type":"ContainerStarted","Data":"449b8a18be63015ede0cef0cae8ff31bc256e0986dabfb9e3c0c4f92bb4c39cd"} Oct 14 13:36:48.283490 master-1 kubenswrapper[4740]: I1014 13:36:48.283406 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-api-2" event={"ID":"f2fa7082-8a5b-4e20-8e84-fbe947fbf1bf","Type":"ContainerStarted","Data":"fd048442ba3ad1cef71256d17570fbe2817204b15ff127d83b0393e6ddc0865e"} Oct 14 13:36:48.283490 master-1 kubenswrapper[4740]: I1014 13:36:48.283436 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-46645-api-2" Oct 14 13:36:48.283490 master-1 kubenswrapper[4740]: I1014 13:36:48.283437 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f653157f-4652-49a8-a3f6-0d952ce477f5-kube-api-access-xqg7l" (OuterVolumeSpecName: "kube-api-access-xqg7l") pod "f653157f-4652-49a8-a3f6-0d952ce477f5" (UID: "f653157f-4652-49a8-a3f6-0d952ce477f5"). InnerVolumeSpecName "kube-api-access-xqg7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:48.283688 master-1 kubenswrapper[4740]: I1014 13:36:48.283654 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-scripts" (OuterVolumeSpecName: "scripts") pod "f653157f-4652-49a8-a3f6-0d952ce477f5" (UID: "f653157f-4652-49a8-a3f6-0d952ce477f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:48.284221 master-1 kubenswrapper[4740]: I1014 13:36:48.284153 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f653157f-4652-49a8-a3f6-0d952ce477f5" (UID: "f653157f-4652-49a8-a3f6-0d952ce477f5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:48.324106 master-1 kubenswrapper[4740]: I1014 13:36:48.323974 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f653157f-4652-49a8-a3f6-0d952ce477f5" (UID: "f653157f-4652-49a8-a3f6-0d952ce477f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:48.397550 master-1 kubenswrapper[4740]: I1014 13:36:48.392410 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqg7l\" (UniqueName: \"kubernetes.io/projected/f653157f-4652-49a8-a3f6-0d952ce477f5-kube-api-access-xqg7l\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:48.397550 master-1 kubenswrapper[4740]: I1014 13:36:48.392480 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:48.397550 master-1 kubenswrapper[4740]: I1014 13:36:48.392498 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-scripts\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:48.397550 master-1 kubenswrapper[4740]: I1014 13:36:48.392518 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f653157f-4652-49a8-a3f6-0d952ce477f5-etc-machine-id\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:48.397550 master-1 kubenswrapper[4740]: I1014 13:36:48.392537 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:48.413437 master-1 kubenswrapper[4740]: I1014 13:36:48.413385 4740 scope.go:117] "RemoveContainer" containerID="35781c4c67292140f74c57106ce369c8ec38106a791f24dc371e1df398352859" Oct 14 13:36:48.452304 master-1 kubenswrapper[4740]: I1014 13:36:48.450663 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data" (OuterVolumeSpecName: "config-data") pod "f653157f-4652-49a8-a3f6-0d952ce477f5" (UID: 
"f653157f-4652-49a8-a3f6-0d952ce477f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:48.452304 master-1 kubenswrapper[4740]: I1014 13:36:48.450844 4740 scope.go:117] "RemoveContainer" containerID="9bd0796ae392f14401e1969973c38e4eb770d8ae1123996b3461d50275e0f124" Oct 14 13:36:48.456998 master-1 kubenswrapper[4740]: I1014 13:36:48.456863 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787cbbf4dc-666ws"] Oct 14 13:36:48.457551 master-1 kubenswrapper[4740]: I1014 13:36:48.457261 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" podUID="4864df54-8895-424b-85df-f8ce3bc5001e" containerName="dnsmasq-dns" containerID="cri-o://03d227c99e07b3086981b44d02b6e02ff2e9d58461f6d5ba85fc4e712af90b49" gracePeriod=10 Oct 14 13:36:48.494710 master-1 kubenswrapper[4740]: I1014 13:36:48.494649 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f653157f-4652-49a8-a3f6-0d952ce477f5-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.218069 master-1 kubenswrapper[4740]: I1014 13:36:49.217920 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-46645-api-2" podStartSLOduration=4.217898554 podStartE2EDuration="4.217898554s" podCreationTimestamp="2025-10-14 13:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:49.210996802 +0000 UTC m=+1835.021286171" watchObservedRunningTime="2025-10-14 13:36:49.217898554 +0000 UTC m=+1835.028187883" Oct 14 13:36:49.305331 master-1 kubenswrapper[4740]: I1014 13:36:49.304611 4740 scope.go:117] "RemoveContainer" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb" Oct 14 13:36:49.305331 master-1 kubenswrapper[4740]: E1014 13:36:49.304843 4740 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:36:49.307670 master-1 kubenswrapper[4740]: I1014 13:36:49.307591 4740 generic.go:334] "Generic (PLEG): container finished" podID="4864df54-8895-424b-85df-f8ce3bc5001e" containerID="03d227c99e07b3086981b44d02b6e02ff2e9d58461f6d5ba85fc4e712af90b49" exitCode=0 Oct 14 13:36:49.307804 master-1 kubenswrapper[4740]: I1014 13:36:49.307752 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" event={"ID":"4864df54-8895-424b-85df-f8ce3bc5001e","Type":"ContainerDied","Data":"03d227c99e07b3086981b44d02b6e02ff2e9d58461f6d5ba85fc4e712af90b49"} Oct 14 13:36:49.744757 master-1 kubenswrapper[4740]: I1014 13:36:49.744701 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:36:49.751219 master-1 kubenswrapper[4740]: I1014 13:36:49.751159 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-backup-0" Oct 14 13:36:49.831740 master-1 kubenswrapper[4740]: I1014 13:36:49.831632 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-config\") pod \"4864df54-8895-424b-85df-f8ce3bc5001e\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " Oct 14 13:36:49.832027 master-1 kubenswrapper[4740]: I1014 13:36:49.831754 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-sb\") pod \"4864df54-8895-424b-85df-f8ce3bc5001e\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " Oct 14 13:36:49.832027 master-1 kubenswrapper[4740]: I1014 13:36:49.831828 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-scripts\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832027 master-1 kubenswrapper[4740]: I1014 13:36:49.831876 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-run\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832027 master-1 kubenswrapper[4740]: I1014 13:36:49.831970 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-iscsi\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832333 master-1 kubenswrapper[4740]: I1014 13:36:49.832025 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832333 master-1 kubenswrapper[4740]: I1014 13:36:49.832076 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-machine-id\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832333 master-1 kubenswrapper[4740]: I1014 13:36:49.832091 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-run" (OuterVolumeSpecName: "run") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.832333 master-1 kubenswrapper[4740]: I1014 13:36:49.832126 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-cinder\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832333 master-1 kubenswrapper[4740]: I1014 13:36:49.832200 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.832333 master-1 kubenswrapper[4740]: I1014 13:36:49.832299 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.832728 master-1 kubenswrapper[4740]: I1014 13:36:49.832338 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr2lx\" (UniqueName: \"kubernetes.io/projected/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-kube-api-access-rr2lx\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832728 master-1 kubenswrapper[4740]: I1014 13:36:49.832404 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-brick\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832728 master-1 kubenswrapper[4740]: I1014 13:36:49.832451 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data-custom\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832728 master-1 kubenswrapper[4740]: I1014 13:36:49.832519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-dev\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832728 master-1 
kubenswrapper[4740]: I1014 13:36:49.832554 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-nvme\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832728 master-1 kubenswrapper[4740]: I1014 13:36:49.832588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-lib-modules\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.832728 master-1 kubenswrapper[4740]: I1014 13:36:49.832642 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-nb\") pod \"4864df54-8895-424b-85df-f8ce3bc5001e\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " Oct 14 13:36:49.832728 master-1 kubenswrapper[4740]: I1014 13:36:49.832691 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-lib-cinder\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.833376 master-1 kubenswrapper[4740]: I1014 13:36:49.832748 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssxjd\" (UniqueName: \"kubernetes.io/projected/4864df54-8895-424b-85df-f8ce3bc5001e-kube-api-access-ssxjd\") pod \"4864df54-8895-424b-85df-f8ce3bc5001e\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " Oct 14 13:36:49.833376 master-1 kubenswrapper[4740]: I1014 13:36:49.832800 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-swift-storage-0\") pod \"4864df54-8895-424b-85df-f8ce3bc5001e\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " Oct 14 13:36:49.833376 master-1 kubenswrapper[4740]: I1014 13:36:49.832900 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-svc\") pod \"4864df54-8895-424b-85df-f8ce3bc5001e\" (UID: \"4864df54-8895-424b-85df-f8ce3bc5001e\") " Oct 14 13:36:49.833376 master-1 kubenswrapper[4740]: I1014 13:36:49.832945 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-sys\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.833376 master-1 kubenswrapper[4740]: I1014 13:36:49.832980 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-combined-ca-bundle\") pod \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\" (UID: \"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5\") " Oct 14 13:36:49.834555 master-1 kubenswrapper[4740]: I1014 13:36:49.833807 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.834555 master-1 kubenswrapper[4740]: I1014 13:36:49.833866 4740 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-run\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.834555 master-1 kubenswrapper[4740]: I1014 13:36:49.833896 4740 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-iscsi\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.834555 master-1 kubenswrapper[4740]: I1014 13:36:49.833904 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-dev" (OuterVolumeSpecName: "dev") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.834555 master-1 kubenswrapper[4740]: I1014 13:36:49.833916 4740 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-cinder\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.834555 master-1 kubenswrapper[4740]: I1014 13:36:49.833971 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.835547 master-1 kubenswrapper[4740]: I1014 13:36:49.835250 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.835547 master-1 kubenswrapper[4740]: I1014 13:36:49.835364 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.835677 master-1 kubenswrapper[4740]: I1014 13:36:49.835571 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-sys" (OuterVolumeSpecName: "sys") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.835677 master-1 kubenswrapper[4740]: I1014 13:36:49.835632 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:36:49.836782 master-1 kubenswrapper[4740]: I1014 13:36:49.836740 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-scripts" (OuterVolumeSpecName: "scripts") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:49.838540 master-1 kubenswrapper[4740]: I1014 13:36:49.838502 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-kube-api-access-rr2lx" (OuterVolumeSpecName: "kube-api-access-rr2lx") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "kube-api-access-rr2lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:49.839610 master-1 kubenswrapper[4740]: I1014 13:36:49.839548 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:49.841201 master-1 kubenswrapper[4740]: I1014 13:36:49.841150 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4864df54-8895-424b-85df-f8ce3bc5001e-kube-api-access-ssxjd" (OuterVolumeSpecName: "kube-api-access-ssxjd") pod "4864df54-8895-424b-85df-f8ce3bc5001e" (UID: "4864df54-8895-424b-85df-f8ce3bc5001e"). InnerVolumeSpecName "kube-api-access-ssxjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:49.882834 master-1 kubenswrapper[4740]: I1014 13:36:49.882738 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4864df54-8895-424b-85df-f8ce3bc5001e" (UID: "4864df54-8895-424b-85df-f8ce3bc5001e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:49.890220 master-1 kubenswrapper[4740]: I1014 13:36:49.890000 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4864df54-8895-424b-85df-f8ce3bc5001e" (UID: "4864df54-8895-424b-85df-f8ce3bc5001e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:49.892261 master-1 kubenswrapper[4740]: I1014 13:36:49.892148 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-config" (OuterVolumeSpecName: "config") pod "4864df54-8895-424b-85df-f8ce3bc5001e" (UID: "4864df54-8895-424b-85df-f8ce3bc5001e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:49.894612 master-1 kubenswrapper[4740]: I1014 13:36:49.894564 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4864df54-8895-424b-85df-f8ce3bc5001e" (UID: "4864df54-8895-424b-85df-f8ce3bc5001e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:49.911904 master-1 kubenswrapper[4740]: I1014 13:36:49.911844 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4864df54-8895-424b-85df-f8ce3bc5001e" (UID: "4864df54-8895-424b-85df-f8ce3bc5001e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:49.915680 master-1 kubenswrapper[4740]: I1014 13:36:49.915622 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:49.934928 master-1 kubenswrapper[4740]: I1014 13:36:49.934850 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr2lx\" (UniqueName: \"kubernetes.io/projected/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-kube-api-access-rr2lx\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.934928 master-1 kubenswrapper[4740]: I1014 13:36:49.934895 4740 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-locks-brick\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.934928 master-1 kubenswrapper[4740]: I1014 13:36:49.934909 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.934928 master-1 kubenswrapper[4740]: I1014 13:36:49.934921 4740 reconciler_common.go:293] "Volume detached for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-dev\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.934928 master-1 kubenswrapper[4740]: I1014 13:36:49.934933 4740 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-nvme\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.934946 4740 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-lib-modules\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.934958 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.934969 4740 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-var-lib-cinder\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.934981 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssxjd\" (UniqueName: \"kubernetes.io/projected/4864df54-8895-424b-85df-f8ce3bc5001e-kube-api-access-ssxjd\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.934992 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.935003 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.935016 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.935027 4740 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-sys\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.935041 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.935052 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4864df54-8895-424b-85df-f8ce3bc5001e-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.935063 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-scripts\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.935332 master-1 kubenswrapper[4740]: I1014 13:36:49.935074 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-etc-machine-id\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:49.960699 master-1 kubenswrapper[4740]: I1014 13:36:49.960624 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data" (OuterVolumeSpecName: "config-data") pod "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" (UID: "ef882b5b-c0e5-47ca-ae59-ff311a14cdb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:50.038584 master-1 kubenswrapper[4740]: I1014 13:36:50.037888 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:36:50.322002 master-1 kubenswrapper[4740]: I1014 13:36:50.321740 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" event={"ID":"4864df54-8895-424b-85df-f8ce3bc5001e","Type":"ContainerDied","Data":"01a026a755bde627c0aec104e4db08c55e60b381fbc33fea085e51e1d516fd45"} Oct 14 13:36:50.322002 master-1 kubenswrapper[4740]: I1014 13:36:50.321834 4740 scope.go:117] "RemoveContainer" containerID="03d227c99e07b3086981b44d02b6e02ff2e9d58461f6d5ba85fc4e712af90b49" Oct 14 13:36:50.322002 master-1 kubenswrapper[4740]: I1014 13:36:50.322001 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787cbbf4dc-666ws" Oct 14 13:36:50.328889 master-1 kubenswrapper[4740]: I1014 13:36:50.328843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"ef882b5b-c0e5-47ca-ae59-ff311a14cdb5","Type":"ContainerDied","Data":"040df63f4c18831478f95488323bfca405ec34e27bb57838b1e99fb19a9106bd"} Oct 14 13:36:50.329055 master-1 kubenswrapper[4740]: I1014 13:36:50.328889 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46645-backup-0" Oct 14 13:36:50.348120 master-1 kubenswrapper[4740]: I1014 13:36:50.348074 4740 scope.go:117] "RemoveContainer" containerID="da0cedfed0fb231148ec99161432858bea96a82e542c40af3d678861d039cb0c" Oct 14 13:36:50.373114 master-1 kubenswrapper[4740]: I1014 13:36:50.372858 4740 scope.go:117] "RemoveContainer" containerID="9ba4a1cb07ba653ec933cd08278473e338b17d6c775d6a8945fd1b50ac774fdf" Oct 14 13:36:50.391439 master-1 kubenswrapper[4740]: I1014 13:36:50.391395 4740 scope.go:117] "RemoveContainer" containerID="eec9e90a49fe748a63927a0b538d1fbd123ea7fe6b9d177dcf595bef2c2b920b" Oct 14 13:36:51.632113 master-1 kubenswrapper[4740]: I1014 13:36:51.632060 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:51.633466 master-1 kubenswrapper[4740]: I1014 13:36:51.633432 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-958c54db4-x58ll" Oct 14 13:36:51.841678 master-1 kubenswrapper[4740]: I1014 13:36:51.841616 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:36:51.841911 master-1 kubenswrapper[4740]: I1014 13:36:51.841735 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:36:51.842007 master-1 kubenswrapper[4740]: I1014 13:36:51.841992 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:36:51.842659 master-1 kubenswrapper[4740]: I1014 13:36:51.842616 4740 scope.go:117] "RemoveContainer" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb" Oct 14 13:36:51.843703 master-1 kubenswrapper[4740]: E1014 13:36:51.843080 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"glance-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:36:51.897856 master-1 kubenswrapper[4740]: I1014 13:36:51.897652 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:36:52.404453 master-1 kubenswrapper[4740]: I1014 13:36:52.404128 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:36:52.405955 master-1 kubenswrapper[4740]: I1014 13:36:52.405913 4740 scope.go:117] "RemoveContainer" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb" Oct 14 13:36:52.406536 master-1 kubenswrapper[4740]: E1014 13:36:52.406501 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:36:52.407486 master-1 kubenswrapper[4740]: I1014 13:36:52.407451 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:36:52.424514 master-1 kubenswrapper[4740]: I1014 13:36:52.423427 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46645-backup-0"] Oct 14 13:36:52.463450 master-1 kubenswrapper[4740]: I1014 13:36:52.461917 4740 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/cinder-46645-backup-0"]
Oct 14 13:36:52.550172 master-1 kubenswrapper[4740]: I1014 13:36:52.550108 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46645-backup-0"]
Oct 14 13:36:52.550567 master-1 kubenswrapper[4740]: E1014 13:36:52.550548 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4864df54-8895-424b-85df-f8ce3bc5001e" containerName="dnsmasq-dns"
Oct 14 13:36:52.550567 master-1 kubenswrapper[4740]: I1014 13:36:52.550565 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4864df54-8895-424b-85df-f8ce3bc5001e" containerName="dnsmasq-dns"
Oct 14 13:36:52.550645 master-1 kubenswrapper[4740]: E1014 13:36:52.550586 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="cinder-backup"
Oct 14 13:36:52.550645 master-1 kubenswrapper[4740]: I1014 13:36:52.550595 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="cinder-backup"
Oct 14 13:36:52.550645 master-1 kubenswrapper[4740]: E1014 13:36:52.550616 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="probe"
Oct 14 13:36:52.550645 master-1 kubenswrapper[4740]: I1014 13:36:52.550622 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="probe"
Oct 14 13:36:52.550645 master-1 kubenswrapper[4740]: E1014 13:36:52.550640 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="cinder-scheduler"
Oct 14 13:36:52.550645 master-1 kubenswrapper[4740]: I1014 13:36:52.550646 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="cinder-scheduler"
Oct 14 13:36:52.550820 master-1 kubenswrapper[4740]: E1014 13:36:52.550658 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4864df54-8895-424b-85df-f8ce3bc5001e" containerName="init"
Oct 14 13:36:52.550820 master-1 kubenswrapper[4740]: I1014 13:36:52.550664 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4864df54-8895-424b-85df-f8ce3bc5001e" containerName="init"
Oct 14 13:36:52.550820 master-1 kubenswrapper[4740]: E1014 13:36:52.550678 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="probe"
Oct 14 13:36:52.550820 master-1 kubenswrapper[4740]: I1014 13:36:52.550683 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="probe"
Oct 14 13:36:52.550979 master-1 kubenswrapper[4740]: I1014 13:36:52.550836 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4864df54-8895-424b-85df-f8ce3bc5001e" containerName="dnsmasq-dns"
Oct 14 13:36:52.550979 master-1 kubenswrapper[4740]: I1014 13:36:52.550862 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="probe"
Oct 14 13:36:52.550979 master-1 kubenswrapper[4740]: I1014 13:36:52.550872 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="cinder-scheduler"
Oct 14 13:36:52.550979 master-1 kubenswrapper[4740]: I1014 13:36:52.550885 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" containerName="cinder-backup"
Oct 14 13:36:52.550979 master-1 kubenswrapper[4740]: I1014 13:36:52.550901 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" containerName="probe"
Oct 14 13:36:52.552158 master-1 kubenswrapper[4740]: I1014 13:36:52.552128 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.557523 master-1 kubenswrapper[4740]: I1014 13:36:52.557481 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-backup-config-data"
Oct 14 13:36:52.558874 master-1 kubenswrapper[4740]: I1014 13:36:52.558839 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787cbbf4dc-666ws"]
Oct 14 13:36:52.567327 master-1 kubenswrapper[4740]: I1014 13:36:52.567253 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-backup-0"]
Oct 14 13:36:52.575806 master-1 kubenswrapper[4740]: I1014 13:36:52.575737 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-787cbbf4dc-666ws"]
Oct 14 13:36:52.580701 master-1 kubenswrapper[4740]: I1014 13:36:52.580652 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46645-scheduler-0"]
Oct 14 13:36:52.588746 master-1 kubenswrapper[4740]: I1014 13:36:52.588667 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-46645-scheduler-0"]
Oct 14 13:36:52.619907 master-1 kubenswrapper[4740]: I1014 13:36:52.619845 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46645-scheduler-0"]
Oct 14 13:36:52.623289 master-1 kubenswrapper[4740]: I1014 13:36:52.622454 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.638956 master-1 kubenswrapper[4740]: I1014 13:36:52.638904 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-46645-scheduler-config-data"
Oct 14 13:36:52.686045 master-1 kubenswrapper[4740]: I1014 13:36:52.685990 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-scheduler-0"]
Oct 14 13:36:52.702468 master-1 kubenswrapper[4740]: I1014 13:36:52.702168 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-combined-ca-bundle\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.702468 master-1 kubenswrapper[4740]: I1014 13:36:52.702303 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-nvme\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.702468 master-1 kubenswrapper[4740]: I1014 13:36:52.702397 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvw2\" (UniqueName: \"kubernetes.io/projected/220d2ca3-9d98-42ca-b1b5-7666e46807aa-kube-api-access-nmvw2\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.702468 master-1 kubenswrapper[4740]: I1014 13:36:52.702415 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-config-data\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.702468 master-1 kubenswrapper[4740]: I1014 13:36:52.702431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-config-data-custom\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.702468 master-1 kubenswrapper[4740]: I1014 13:36:52.702456 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-locks-cinder\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.702468 master-1 kubenswrapper[4740]: I1014 13:36:52.702477 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-machine-id\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702511 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-scripts\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-sys\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702540 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-config-data\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702556 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-lib-cinder\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702570 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-iscsi\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702599 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-etc-machine-id\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-config-data-custom\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702643 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-dev\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702665 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srp9c\" (UniqueName: \"kubernetes.io/projected/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-kube-api-access-srp9c\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702684 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-locks-brick\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702707 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-combined-ca-bundle\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702747 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-lib-modules\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702816 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-scripts\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.703425 master-1 kubenswrapper[4740]: I1014 13:36:52.702858 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-run\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804213 master-1 kubenswrapper[4740]: I1014 13:36:52.804154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-run\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804213 master-1 kubenswrapper[4740]: I1014 13:36:52.804212 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-combined-ca-bundle\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804249 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-nvme\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804284 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-config-data\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmvw2\" (UniqueName: \"kubernetes.io/projected/220d2ca3-9d98-42ca-b1b5-7666e46807aa-kube-api-access-nmvw2\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804321 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-config-data-custom\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-locks-cinder\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804363 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-machine-id\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-scripts\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804405 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-sys\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804419 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-config-data\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804435 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-lib-cinder\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-iscsi\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804473 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-etc-machine-id\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.804495 master-1 kubenswrapper[4740]: I1014 13:36:52.804495 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-config-data-custom\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804951 master-1 kubenswrapper[4740]: I1014 13:36:52.804519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-dev\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804951 master-1 kubenswrapper[4740]: I1014 13:36:52.804542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srp9c\" (UniqueName: \"kubernetes.io/projected/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-kube-api-access-srp9c\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.804951 master-1 kubenswrapper[4740]: I1014 13:36:52.804561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-locks-brick\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804951 master-1 kubenswrapper[4740]: I1014 13:36:52.804600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-combined-ca-bundle\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.804951 master-1 kubenswrapper[4740]: I1014 13:36:52.804635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-lib-modules\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.804951 master-1 kubenswrapper[4740]: I1014 13:36:52.804653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-scripts\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.809416 master-1 kubenswrapper[4740]: I1014 13:36:52.809176 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-sys\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.830315 master-1 kubenswrapper[4740]: I1014 13:36:52.822513 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-run\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.830315 master-1 kubenswrapper[4740]: I1014 13:36:52.827017 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-scripts\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.830315 master-1 kubenswrapper[4740]: I1014 13:36:52.827076 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-nvme\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.830315 master-1 kubenswrapper[4740]: I1014 13:36:52.827208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-locks-cinder\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.831379 master-1 kubenswrapper[4740]: I1014 13:36:52.831329 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-machine-id\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.834674 master-1 kubenswrapper[4740]: I1014 13:36:52.834630 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-scripts\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.835323 master-1 kubenswrapper[4740]: I1014 13:36:52.835258 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-combined-ca-bundle\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.836881 master-1 kubenswrapper[4740]: I1014 13:36:52.836845 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-config-data\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.836968 master-1 kubenswrapper[4740]: I1014 13:36:52.836939 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-dev\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.837019 master-1 kubenswrapper[4740]: I1014 13:36:52.836981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-lib-cinder\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.837019 master-1 kubenswrapper[4740]: I1014 13:36:52.837004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-etc-iscsi\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.837105 master-1 kubenswrapper[4740]: I1014 13:36:52.837041 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-etc-machine-id\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.840127 master-1 kubenswrapper[4740]: I1014 13:36:52.840081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-config-data\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.840475 master-1 kubenswrapper[4740]: I1014 13:36:52.840398 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-config-data-custom\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.840579 master-1 kubenswrapper[4740]: I1014 13:36:52.840547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-var-locks-brick\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.840652 master-1 kubenswrapper[4740]: I1014 13:36:52.840592 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/220d2ca3-9d98-42ca-b1b5-7666e46807aa-lib-modules\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.842673 master-1 kubenswrapper[4740]: I1014 13:36:52.842643 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/220d2ca3-9d98-42ca-b1b5-7666e46807aa-config-data-custom\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.847254 master-1 kubenswrapper[4740]: I1014 13:36:52.845211 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-combined-ca-bundle\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.851257 master-1 kubenswrapper[4740]: I1014 13:36:52.850894 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmvw2\" (UniqueName: \"kubernetes.io/projected/220d2ca3-9d98-42ca-b1b5-7666e46807aa-kube-api-access-nmvw2\") pod \"cinder-46645-backup-0\" (UID: \"220d2ca3-9d98-42ca-b1b5-7666e46807aa\") " pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.866258 master-1 kubenswrapper[4740]: I1014 13:36:52.865923 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srp9c\" (UniqueName: \"kubernetes.io/projected/1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0-kube-api-access-srp9c\") pod \"cinder-46645-scheduler-0\" (UID: \"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0\") " pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:52.877742 master-1 kubenswrapper[4740]: I1014 13:36:52.877676 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-backup-0"
Oct 14 13:36:52.956576 master-1 kubenswrapper[4740]: I1014 13:36:52.955018 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4864df54-8895-424b-85df-f8ce3bc5001e" path="/var/lib/kubelet/pods/4864df54-8895-424b-85df-f8ce3bc5001e/volumes"
Oct 14 13:36:52.956576 master-1 kubenswrapper[4740]: I1014 13:36:52.956337 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef882b5b-c0e5-47ca-ae59-ff311a14cdb5" path="/var/lib/kubelet/pods/ef882b5b-c0e5-47ca-ae59-ff311a14cdb5/volumes"
Oct 14 13:36:52.961696 master-1 kubenswrapper[4740]: I1014 13:36:52.961643 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f653157f-4652-49a8-a3f6-0d952ce477f5" path="/var/lib/kubelet/pods/f653157f-4652-49a8-a3f6-0d952ce477f5/volumes"
Oct 14 13:36:52.965352 master-1 kubenswrapper[4740]: I1014 13:36:52.962957 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46645-scheduler-0"
Oct 14 13:36:53.147578 master-1 kubenswrapper[4740]: I1014 13:36:53.147327 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 14 13:36:53.149370 master-1 kubenswrapper[4740]: I1014 13:36:53.149069 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 13:36:53.152948 master-1 kubenswrapper[4740]: I1014 13:36:53.152913 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Oct 14 13:36:53.155710 master-1 kubenswrapper[4740]: I1014 13:36:53.155687 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Oct 14 13:36:53.175980 master-1 kubenswrapper[4740]: I1014 13:36:53.175838 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 14 13:36:53.313641 master-1 kubenswrapper[4740]: I1014 13:36:53.313600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f852429f-606a-43cc-a4ec-e64ab8a24315-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.314030 master-1 kubenswrapper[4740]: I1014 13:36:53.313960 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f852429f-606a-43cc-a4ec-e64ab8a24315-openstack-config\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.314167 master-1 kubenswrapper[4740]: I1014 13:36:53.314121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfvkd\" (UniqueName: \"kubernetes.io/projected/f852429f-606a-43cc-a4ec-e64ab8a24315-kube-api-access-wfvkd\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.314698 master-1 kubenswrapper[4740]: I1014 13:36:53.314652 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f852429f-606a-43cc-a4ec-e64ab8a24315-openstack-config-secret\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.413199 master-1 kubenswrapper[4740]: I1014 13:36:53.413135 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused"
Oct 14 13:36:53.413563 master-1 kubenswrapper[4740]: I1014 13:36:53.413361 4740 scope.go:117] "RemoveContainer" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb"
Oct 14 13:36:53.413740 master-1 kubenswrapper[4740]: E1014 13:36:53.413671 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d"
Oct 14 13:36:53.416887 master-1 kubenswrapper[4740]: I1014 13:36:53.416033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfvkd\" (UniqueName: \"kubernetes.io/projected/f852429f-606a-43cc-a4ec-e64ab8a24315-kube-api-access-wfvkd\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.416887 master-1 kubenswrapper[4740]: I1014 13:36:53.416189 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f852429f-606a-43cc-a4ec-e64ab8a24315-openstack-config-secret\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.416887 master-1 kubenswrapper[4740]: I1014 13:36:53.416242 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f852429f-606a-43cc-a4ec-e64ab8a24315-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.416887 master-1 kubenswrapper[4740]: I1014 13:36:53.416276 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f852429f-606a-43cc-a4ec-e64ab8a24315-openstack-config\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.417876 master-1 kubenswrapper[4740]: I1014 13:36:53.417740 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f852429f-606a-43cc-a4ec-e64ab8a24315-openstack-config\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.422675 master-1 kubenswrapper[4740]: I1014 13:36:53.420484 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f852429f-606a-43cc-a4ec-e64ab8a24315-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.422675 master-1 kubenswrapper[4740]: I1014 13:36:53.420772 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f852429f-606a-43cc-a4ec-e64ab8a24315-openstack-config-secret\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.450288 master-1 kubenswrapper[4740]: I1014 13:36:53.450125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfvkd\" (UniqueName: \"kubernetes.io/projected/f852429f-606a-43cc-a4ec-e64ab8a24315-kube-api-access-wfvkd\") pod \"openstackclient\" (UID: \"f852429f-606a-43cc-a4ec-e64ab8a24315\") " pod="openstack/openstackclient"
Oct 14 13:36:53.477965 master-1 kubenswrapper[4740]: I1014 13:36:53.477117 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 14 13:36:53.499627 master-1 kubenswrapper[4740]: I1014 13:36:53.499549 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-backup-0"]
Oct 14 13:36:53.504568 master-1 kubenswrapper[4740]: W1014 13:36:53.504515 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220d2ca3_9d98_42ca_b1b5_7666e46807aa.slice/crio-16015b0f49081efaa3855fc8cba42aeed2fb79c457e23d3f29d7f5c854c31280 WatchSource:0}: Error finding container 16015b0f49081efaa3855fc8cba42aeed2fb79c457e23d3f29d7f5c854c31280: Status 404 returned error can't find the container with id 16015b0f49081efaa3855fc8cba42aeed2fb79c457e23d3f29d7f5c854c31280
Oct 14 13:36:53.643798 master-1 kubenswrapper[4740]: I1014 13:36:53.643729 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46645-scheduler-0"]
Oct 14 13:36:53.661704 master-1 kubenswrapper[4740]: W1014 13:36:53.660084 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a0dd8ed_4be7_4ee6_a5bf_e466b57751c0.slice/crio-e49725414fd953ad6c12f3cdeebb92b6de6011a7310fc67b75dfd6886ab98be4 WatchSource:0}: Error finding container e49725414fd953ad6c12f3cdeebb92b6de6011a7310fc67b75dfd6886ab98be4: Status 404 returned error can't find the container with id e49725414fd953ad6c12f3cdeebb92b6de6011a7310fc67b75dfd6886ab98be4
Oct 14 13:36:53.963212 master-1 kubenswrapper[4740]: I1014 13:36:53.963136 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-675bcd49b4-pn7dg"
Oct 14 13:36:54.004714 master-1 kubenswrapper[4740]: I1014 13:36:54.004212 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 14 13:36:54.007135 master-1 kubenswrapper[4740]: W1014 13:36:54.007005 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf852429f_606a_43cc_a4ec_e64ab8a24315.slice/crio-c3d2fcb1f91fb36596435e7a5e2d60a269329b3425ffa266b7f3c7acfb1e75fd WatchSource:0}: Error finding container c3d2fcb1f91fb36596435e7a5e2d60a269329b3425ffa266b7f3c7acfb1e75fd: Status 404 returned error can't find the container with id c3d2fcb1f91fb36596435e7a5e2d60a269329b3425ffa266b7f3c7acfb1e75fd
Oct 14 13:36:54.433995 master-1 kubenswrapper[4740]: I1014 13:36:54.433902 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f852429f-606a-43cc-a4ec-e64ab8a24315","Type":"ContainerStarted","Data":"c3d2fcb1f91fb36596435e7a5e2d60a269329b3425ffa266b7f3c7acfb1e75fd"}
Oct 14 13:36:54.439932 master-1 kubenswrapper[4740]: I1014 13:36:54.439866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"220d2ca3-9d98-42ca-b1b5-7666e46807aa","Type":"ContainerStarted","Data":"9445b4d084375a05616619b3d5b791064cf4306ddab12e688a2057e529b20e7c"}
Oct 14 13:36:54.440065 master-1 kubenswrapper[4740]: I1014 13:36:54.439989 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0" event={"ID":"220d2ca3-9d98-42ca-b1b5-7666e46807aa","Type":"ContainerStarted","Data":"69b380863b10aedb40b98f81b378218c6aee3dc84d34a601376331c85e91cf59"}
Oct 14 13:36:54.440065 master-1 kubenswrapper[4740]: I1014 13:36:54.440004 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-backup-0"
event={"ID":"220d2ca3-9d98-42ca-b1b5-7666e46807aa","Type":"ContainerStarted","Data":"16015b0f49081efaa3855fc8cba42aeed2fb79c457e23d3f29d7f5c854c31280"} Oct 14 13:36:54.444249 master-1 kubenswrapper[4740]: I1014 13:36:54.443049 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0","Type":"ContainerStarted","Data":"f3fc02134004d39af4b6e2a7e2f9f83eabab076c651bdba52f8690e88301cfbc"} Oct 14 13:36:54.444249 master-1 kubenswrapper[4740]: I1014 13:36:54.443102 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0","Type":"ContainerStarted","Data":"e49725414fd953ad6c12f3cdeebb92b6de6011a7310fc67b75dfd6886ab98be4"} Oct 14 13:36:54.474450 master-1 kubenswrapper[4740]: I1014 13:36:54.474352 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-46645-backup-0" podStartSLOduration=2.474331066 podStartE2EDuration="2.474331066s" podCreationTimestamp="2025-10-14 13:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:54.470762362 +0000 UTC m=+1840.281051691" watchObservedRunningTime="2025-10-14 13:36:54.474331066 +0000 UTC m=+1840.284620405" Oct 14 13:36:54.951128 master-1 kubenswrapper[4740]: I1014 13:36:54.951053 4740 scope.go:117] "RemoveContainer" containerID="28a0443fce7c8344840417a03e93a9362711545a57eafed187ec416fc5ed0bdc" Oct 14 13:36:54.951615 master-1 kubenswrapper[4740]: E1014 13:36:54.951360 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" 
Oct 14 13:36:55.035873 master-1 kubenswrapper[4740]: I1014 13:36:55.035757 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f85dff564-q5t6l"] Oct 14 13:36:55.038003 master-1 kubenswrapper[4740]: I1014 13:36:55.037949 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.042098 master-1 kubenswrapper[4740]: I1014 13:36:55.042059 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 14 13:36:55.042648 master-1 kubenswrapper[4740]: I1014 13:36:55.042314 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 14 13:36:55.042648 master-1 kubenswrapper[4740]: I1014 13:36:55.042459 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 14 13:36:55.043122 master-1 kubenswrapper[4740]: I1014 13:36:55.043100 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 14 13:36:55.043349 master-1 kubenswrapper[4740]: I1014 13:36:55.043314 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 14 13:36:55.085214 master-1 kubenswrapper[4740]: I1014 13:36:55.085150 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f85dff564-q5t6l"] Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156468 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-etc-swift\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156553 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-75vh5\" (UniqueName: \"kubernetes.io/projected/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-kube-api-access-75vh5\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156578 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-public-tls-certs\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156599 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-run-httpd\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156629 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-log-httpd\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156647 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-config-data\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156720 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-internal-tls-certs\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.156902 master-1 kubenswrapper[4740]: I1014 13:36:55.156757 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-combined-ca-bundle\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.258002 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-etc-swift\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.258086 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vh5\" (UniqueName: \"kubernetes.io/projected/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-kube-api-access-75vh5\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.258119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-public-tls-certs\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 
kubenswrapper[4740]: I1014 13:36:55.258144 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-run-httpd\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.258168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-log-httpd\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.258192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-config-data\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.258261 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-internal-tls-certs\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.258313 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-combined-ca-bundle\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: 
I1014 13:36:55.259402 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-log-httpd\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.259895 master-1 kubenswrapper[4740]: I1014 13:36:55.259709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-run-httpd\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.265269 master-1 kubenswrapper[4740]: I1014 13:36:55.261763 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-public-tls-certs\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.265269 master-1 kubenswrapper[4740]: I1014 13:36:55.262415 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-etc-swift\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.265269 master-1 kubenswrapper[4740]: I1014 13:36:55.262871 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-combined-ca-bundle\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.265269 master-1 kubenswrapper[4740]: I1014 13:36:55.263668 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-config-data\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.269261 master-1 kubenswrapper[4740]: I1014 13:36:55.266033 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-internal-tls-certs\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.285698 master-1 kubenswrapper[4740]: I1014 13:36:55.285288 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vh5\" (UniqueName: \"kubernetes.io/projected/c5561ae4-eb1f-47ba-929b-c2b25b1efc8f-kube-api-access-75vh5\") pod \"swift-proxy-f85dff564-q5t6l\" (UID: \"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f\") " pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.450214 master-1 kubenswrapper[4740]: I1014 13:36:55.450158 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:55.496158 master-1 kubenswrapper[4740]: I1014 13:36:55.495812 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46645-scheduler-0" event={"ID":"1a0dd8ed-4be7-4ee6-a5bf-e466b57751c0","Type":"ContainerStarted","Data":"c6437be414b3e87f65a6515dad04d59ce84f96c6874aed4f09243b88e78389e0"} Oct 14 13:36:55.665250 master-1 kubenswrapper[4740]: I1014 13:36:55.663401 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-46645-scheduler-0" podStartSLOduration=3.663379984 podStartE2EDuration="3.663379984s" podCreationTimestamp="2025-10-14 13:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:55.531897382 +0000 UTC m=+1841.342186721" watchObservedRunningTime="2025-10-14 13:36:55.663379984 +0000 UTC m=+1841.473669333" Oct 14 13:36:55.997885 master-1 kubenswrapper[4740]: I1014 13:36:55.997814 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f85dff564-q5t6l"] Oct 14 13:36:56.016472 master-1 kubenswrapper[4740]: W1014 13:36:56.016420 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5561ae4_eb1f_47ba_929b_c2b25b1efc8f.slice/crio-382c3ecb019e335b7550226476edfa9191b2be58ed74b410e8ae3b8b076f996b WatchSource:0}: Error finding container 382c3ecb019e335b7550226476edfa9191b2be58ed74b410e8ae3b8b076f996b: Status 404 returned error can't find the container with id 382c3ecb019e335b7550226476edfa9191b2be58ed74b410e8ae3b8b076f996b Oct 14 13:36:56.536200 master-1 kubenswrapper[4740]: I1014 13:36:56.530242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f85dff564-q5t6l" 
event={"ID":"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f","Type":"ContainerStarted","Data":"b295d865b29e93ab6e1022c877bb52c110df1dd670960335aa837d32a232946f"} Oct 14 13:36:56.536200 master-1 kubenswrapper[4740]: I1014 13:36:56.530301 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f85dff564-q5t6l" event={"ID":"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f","Type":"ContainerStarted","Data":"382c3ecb019e335b7550226476edfa9191b2be58ed74b410e8ae3b8b076f996b"} Oct 14 13:36:56.743249 master-1 kubenswrapper[4740]: I1014 13:36:56.733931 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55c4fcb4cb-xfg9j" Oct 14 13:36:57.587674 master-1 kubenswrapper[4740]: I1014 13:36:57.587581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f85dff564-q5t6l" event={"ID":"c5561ae4-eb1f-47ba-929b-c2b25b1efc8f","Type":"ContainerStarted","Data":"391acb064afae1ff57aeef1e988067b6361301fcc63f7f2d799623191a2c068a"} Oct 14 13:36:57.589130 master-1 kubenswrapper[4740]: I1014 13:36:57.589080 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:57.590434 master-1 kubenswrapper[4740]: I1014 13:36:57.590418 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:36:57.877459 master-1 kubenswrapper[4740]: I1014 13:36:57.877317 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f85dff564-q5t6l" podStartSLOduration=3.877295586 podStartE2EDuration="3.877295586s" podCreationTimestamp="2025-10-14 13:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:57.644899897 +0000 UTC m=+1843.455189236" watchObservedRunningTime="2025-10-14 13:36:57.877295586 +0000 UTC m=+1843.687584915" Oct 14 13:36:57.881519 master-1 
kubenswrapper[4740]: I1014 13:36:57.879623 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7588d45c67-s98sq"] Oct 14 13:36:57.881519 master-1 kubenswrapper[4740]: I1014 13:36:57.881327 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:57.882176 master-1 kubenswrapper[4740]: I1014 13:36:57.882150 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-46645-backup-0" Oct 14 13:36:57.887824 master-1 kubenswrapper[4740]: I1014 13:36:57.887756 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Oct 14 13:36:57.888250 master-1 kubenswrapper[4740]: I1014 13:36:57.888206 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 14 13:36:57.892665 master-1 kubenswrapper[4740]: I1014 13:36:57.892609 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7588d45c67-s98sq"] Oct 14 13:36:57.968533 master-1 kubenswrapper[4740]: I1014 13:36:57.968471 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-46645-scheduler-0" Oct 14 13:36:57.992345 master-1 kubenswrapper[4740]: I1014 13:36:57.992277 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data-custom\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:57.992345 master-1 kubenswrapper[4740]: I1014 13:36:57.992346 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wx2\" (UniqueName: \"kubernetes.io/projected/2a30d790-7aaa-4754-a568-1a3804649217-kube-api-access-h5wx2\") pod 
\"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:57.992584 master-1 kubenswrapper[4740]: I1014 13:36:57.992456 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:57.992584 master-1 kubenswrapper[4740]: I1014 13:36:57.992499 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-combined-ca-bundle\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.096242 master-1 kubenswrapper[4740]: I1014 13:36:58.094420 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.096242 master-1 kubenswrapper[4740]: I1014 13:36:58.094745 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-combined-ca-bundle\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.096242 master-1 kubenswrapper[4740]: I1014 13:36:58.094852 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data-custom\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.096242 master-1 kubenswrapper[4740]: I1014 13:36:58.094873 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wx2\" (UniqueName: \"kubernetes.io/projected/2a30d790-7aaa-4754-a568-1a3804649217-kube-api-access-h5wx2\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.100257 master-1 kubenswrapper[4740]: I1014 13:36:58.100118 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data-custom\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.105259 master-1 kubenswrapper[4740]: I1014 13:36:58.100660 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.108464 master-1 kubenswrapper[4740]: I1014 13:36:58.107419 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-combined-ca-bundle\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.233658 master-1 kubenswrapper[4740]: I1014 13:36:58.233603 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wx2\" (UniqueName: 
\"kubernetes.io/projected/2a30d790-7aaa-4754-a568-1a3804649217-kube-api-access-h5wx2\") pod \"heat-engine-7588d45c67-s98sq\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.243765 master-1 kubenswrapper[4740]: I1014 13:36:58.243688 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:58.299996 master-1 kubenswrapper[4740]: I1014 13:36:58.299951 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-46645-api-2" Oct 14 13:36:58.323821 master-1 kubenswrapper[4740]: I1014 13:36:58.323634 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8b568997-972jn"] Oct 14 13:36:58.325910 master-1 kubenswrapper[4740]: I1014 13:36:58.325880 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.339195 master-1 kubenswrapper[4740]: I1014 13:36:58.339101 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8b568997-972jn"] Oct 14 13:36:58.411319 master-1 kubenswrapper[4740]: I1014 13:36:58.404975 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-svc\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.411319 master-1 kubenswrapper[4740]: I1014 13:36:58.405097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrv5\" (UniqueName: \"kubernetes.io/projected/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-kube-api-access-rvrv5\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 
13:36:58.411319 master-1 kubenswrapper[4740]: I1014 13:36:58.405197 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.411319 master-1 kubenswrapper[4740]: I1014 13:36:58.405251 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.411319 master-1 kubenswrapper[4740]: I1014 13:36:58.405311 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-config\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.411319 master-1 kubenswrapper[4740]: I1014 13:36:58.405354 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.507256 master-1 kubenswrapper[4740]: I1014 13:36:58.506972 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvrv5\" (UniqueName: \"kubernetes.io/projected/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-kube-api-access-rvrv5\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: 
\"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.507256 master-1 kubenswrapper[4740]: I1014 13:36:58.507085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.507256 master-1 kubenswrapper[4740]: I1014 13:36:58.507113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.507256 master-1 kubenswrapper[4740]: I1014 13:36:58.507139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-config\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.507549 master-1 kubenswrapper[4740]: I1014 13:36:58.507171 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.507549 master-1 kubenswrapper[4740]: I1014 13:36:58.507414 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-svc\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: 
\"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.511262 master-1 kubenswrapper[4740]: I1014 13:36:58.508704 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-svc\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.515251 master-1 kubenswrapper[4740]: I1014 13:36:58.511415 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.515251 master-1 kubenswrapper[4740]: I1014 13:36:58.512204 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.515251 master-1 kubenswrapper[4740]: I1014 13:36:58.512615 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-config\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.515251 master-1 kubenswrapper[4740]: I1014 13:36:58.512840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " 
pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.556249 master-1 kubenswrapper[4740]: I1014 13:36:58.555391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvrv5\" (UniqueName: \"kubernetes.io/projected/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-kube-api-access-rvrv5\") pod \"dnsmasq-dns-6f8b568997-972jn\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.710251 master-1 kubenswrapper[4740]: I1014 13:36:58.709336 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:36:58.766256 master-1 kubenswrapper[4740]: I1014 13:36:58.755540 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7588d45c67-s98sq"] Oct 14 13:36:58.784375 master-1 kubenswrapper[4740]: W1014 13:36:58.783046 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a30d790_7aaa_4754_a568_1a3804649217.slice/crio-25e6d4f9f82daf31a37398c8060aac77d2c685d18b6942d85a7a3df7b468378d WatchSource:0}: Error finding container 25e6d4f9f82daf31a37398c8060aac77d2c685d18b6942d85a7a3df7b468378d: Status 404 returned error can't find the container with id 25e6d4f9f82daf31a37398c8060aac77d2c685d18b6942d85a7a3df7b468378d Oct 14 13:36:59.332656 master-1 kubenswrapper[4740]: I1014 13:36:59.331294 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8b568997-972jn"] Oct 14 13:36:59.620256 master-1 kubenswrapper[4740]: I1014 13:36:59.620163 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8b568997-972jn" event={"ID":"c40f97f4-5012-4f9c-bb3b-5bb53d3544be","Type":"ContainerStarted","Data":"9e866b457ee329dfd5d0b2f39e75e97a6c3b90210ba2c5975d723768b51a288b"} Oct 14 13:36:59.626080 master-1 kubenswrapper[4740]: I1014 13:36:59.624610 4740 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-engine-7588d45c67-s98sq" event={"ID":"2a30d790-7aaa-4754-a568-1a3804649217","Type":"ContainerStarted","Data":"51771210c49aed6393c59011363ce93f2dcdafd2b01f355b3ad1bc2d81ce7fd7"} Oct 14 13:36:59.626080 master-1 kubenswrapper[4740]: I1014 13:36:59.624704 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7588d45c67-s98sq" event={"ID":"2a30d790-7aaa-4754-a568-1a3804649217","Type":"ContainerStarted","Data":"25e6d4f9f82daf31a37398c8060aac77d2c685d18b6942d85a7a3df7b468378d"} Oct 14 13:36:59.626080 master-1 kubenswrapper[4740]: I1014 13:36:59.625516 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:36:59.668081 master-1 kubenswrapper[4740]: I1014 13:36:59.662206 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7588d45c67-s98sq" podStartSLOduration=2.662186803 podStartE2EDuration="2.662186803s" podCreationTimestamp="2025-10-14 13:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:59.65942045 +0000 UTC m=+1845.469709779" watchObservedRunningTime="2025-10-14 13:36:59.662186803 +0000 UTC m=+1845.472476132" Oct 14 13:37:00.633862 master-1 kubenswrapper[4740]: I1014 13:37:00.633802 4740 generic.go:334] "Generic (PLEG): container finished" podID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerID="53da13037470673ca6135247826d3dac951c542ca939362e73081083c420aaa2" exitCode=0 Oct 14 13:37:00.634701 master-1 kubenswrapper[4740]: I1014 13:37:00.633900 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8b568997-972jn" event={"ID":"c40f97f4-5012-4f9c-bb3b-5bb53d3544be","Type":"ContainerDied","Data":"53da13037470673ca6135247826d3dac951c542ca939362e73081083c420aaa2"} Oct 14 13:37:01.650689 master-1 kubenswrapper[4740]: I1014 13:37:01.650593 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8b568997-972jn" event={"ID":"c40f97f4-5012-4f9c-bb3b-5bb53d3544be","Type":"ContainerStarted","Data":"3a1f848ad17cf9bd1575faaa3b50e700f50c0398c2977c590e297d1c7978a8c7"} Oct 14 13:37:01.651333 master-1 kubenswrapper[4740]: I1014 13:37:01.650931 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:37:01.699258 master-1 kubenswrapper[4740]: I1014 13:37:01.698538 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f8b568997-972jn" podStartSLOduration=3.69852004 podStartE2EDuration="3.69852004s" podCreationTimestamp="2025-10-14 13:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:37:01.692201383 +0000 UTC m=+1847.502490712" watchObservedRunningTime="2025-10-14 13:37:01.69852004 +0000 UTC m=+1847.508809369" Oct 14 13:37:03.126528 master-1 kubenswrapper[4740]: I1014 13:37:03.126467 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-46645-backup-0" Oct 14 13:37:03.173800 master-1 kubenswrapper[4740]: I1014 13:37:03.173733 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-46645-scheduler-0" Oct 14 13:37:05.236042 master-1 kubenswrapper[4740]: I1014 13:37:05.235932 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5fdd5c7f69-nmdkf"] Oct 14 13:37:05.238089 master-1 kubenswrapper[4740]: I1014 13:37:05.238040 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.290384 master-1 kubenswrapper[4740]: I1014 13:37:05.290312 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5fdd5c7f69-nmdkf"] Oct 14 13:37:05.329985 master-1 kubenswrapper[4740]: I1014 13:37:05.328988 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-config-data-custom\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.329985 master-1 kubenswrapper[4740]: I1014 13:37:05.329244 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-combined-ca-bundle\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.329985 master-1 kubenswrapper[4740]: I1014 13:37:05.329350 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-config-data\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.329985 master-1 kubenswrapper[4740]: I1014 13:37:05.329521 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqhx\" (UniqueName: \"kubernetes.io/projected/ca413fda-083a-45c2-a544-c0c38f23632f-kube-api-access-glqhx\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.436002 master-1 kubenswrapper[4740]: I1014 
13:37:05.435054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glqhx\" (UniqueName: \"kubernetes.io/projected/ca413fda-083a-45c2-a544-c0c38f23632f-kube-api-access-glqhx\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.436002 master-1 kubenswrapper[4740]: I1014 13:37:05.435149 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-config-data-custom\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.436002 master-1 kubenswrapper[4740]: I1014 13:37:05.435266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-combined-ca-bundle\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.436002 master-1 kubenswrapper[4740]: I1014 13:37:05.435369 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-config-data\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.446295 master-1 kubenswrapper[4740]: I1014 13:37:05.443390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-combined-ca-bundle\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.447205 master-1 
kubenswrapper[4740]: I1014 13:37:05.447147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-config-data-custom\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.458592 master-1 kubenswrapper[4740]: I1014 13:37:05.458498 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:37:05.460774 master-1 kubenswrapper[4740]: I1014 13:37:05.460383 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f85dff564-q5t6l" Oct 14 13:37:05.466044 master-1 kubenswrapper[4740]: I1014 13:37:05.465992 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqhx\" (UniqueName: \"kubernetes.io/projected/ca413fda-083a-45c2-a544-c0c38f23632f-kube-api-access-glqhx\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.468075 master-1 kubenswrapper[4740]: I1014 13:37:05.468044 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca413fda-083a-45c2-a544-c0c38f23632f-config-data\") pod \"heat-engine-5fdd5c7f69-nmdkf\" (UID: \"ca413fda-083a-45c2-a544-c0c38f23632f\") " pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.574373 master-1 kubenswrapper[4740]: I1014 13:37:05.573850 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:05.943982 master-1 kubenswrapper[4740]: I1014 13:37:05.943934 4740 scope.go:117] "RemoveContainer" containerID="28a0443fce7c8344840417a03e93a9362711545a57eafed187ec416fc5ed0bdc" Oct 14 13:37:06.647333 master-1 kubenswrapper[4740]: I1014 13:37:06.647271 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-888f485ff-gvvzl"] Oct 14 13:37:06.648617 master-1 kubenswrapper[4740]: I1014 13:37:06.648595 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.655185 master-1 kubenswrapper[4740]: I1014 13:37:06.655137 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 14 13:37:06.655258 master-1 kubenswrapper[4740]: I1014 13:37:06.655216 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Oct 14 13:37:06.655485 master-1 kubenswrapper[4740]: I1014 13:37:06.655377 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Oct 14 13:37:06.670898 master-1 kubenswrapper[4740]: I1014 13:37:06.670831 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-888f485ff-gvvzl"] Oct 14 13:37:06.767656 master-1 kubenswrapper[4740]: I1014 13:37:06.767590 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-internal-tls-certs\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.767883 master-1 kubenswrapper[4740]: I1014 13:37:06.767667 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-public-tls-certs\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.767883 master-1 kubenswrapper[4740]: I1014 13:37:06.767745 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-combined-ca-bundle\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.767883 master-1 kubenswrapper[4740]: I1014 13:37:06.767825 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-config-data-custom\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.767883 master-1 kubenswrapper[4740]: I1014 13:37:06.767851 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbst\" (UniqueName: \"kubernetes.io/projected/afd108f8-0a2c-427b-a952-7863fce4ffae-kube-api-access-lsbst\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.768020 master-1 kubenswrapper[4740]: I1014 13:37:06.767901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-config-data\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.869732 master-1 kubenswrapper[4740]: I1014 13:37:06.869668 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-combined-ca-bundle\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.869943 master-1 kubenswrapper[4740]: I1014 13:37:06.869755 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-config-data-custom\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.869943 master-1 kubenswrapper[4740]: I1014 13:37:06.869778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbst\" (UniqueName: \"kubernetes.io/projected/afd108f8-0a2c-427b-a952-7863fce4ffae-kube-api-access-lsbst\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.869943 master-1 kubenswrapper[4740]: I1014 13:37:06.869803 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-config-data\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.869943 master-1 kubenswrapper[4740]: I1014 13:37:06.869866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-internal-tls-certs\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.869943 master-1 kubenswrapper[4740]: I1014 13:37:06.869896 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-public-tls-certs\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.873857 master-1 kubenswrapper[4740]: I1014 13:37:06.873823 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-public-tls-certs\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.874334 master-1 kubenswrapper[4740]: I1014 13:37:06.874282 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-config-data-custom\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.875612 master-1 kubenswrapper[4740]: I1014 13:37:06.875556 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-combined-ca-bundle\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.875682 master-1 kubenswrapper[4740]: I1014 13:37:06.875563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-internal-tls-certs\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.876494 master-1 kubenswrapper[4740]: I1014 13:37:06.876448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/afd108f8-0a2c-427b-a952-7863fce4ffae-config-data\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.904844 master-1 kubenswrapper[4740]: I1014 13:37:06.904772 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbst\" (UniqueName: \"kubernetes.io/projected/afd108f8-0a2c-427b-a952-7863fce4ffae-kube-api-access-lsbst\") pod \"heat-api-888f485ff-gvvzl\" (UID: \"afd108f8-0a2c-427b-a952-7863fce4ffae\") " pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:06.999588 master-1 kubenswrapper[4740]: I1014 13:37:06.999504 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:07.944043 master-1 kubenswrapper[4740]: I1014 13:37:07.943967 4740 scope.go:117] "RemoveContainer" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb" Oct 14 13:37:07.945832 master-1 kubenswrapper[4740]: I1014 13:37:07.945751 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:08.299791 master-1 kubenswrapper[4740]: I1014 13:37:08.298819 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:37:08.740217 master-1 kubenswrapper[4740]: I1014 13:37:08.736937 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:37:08.770748 master-1 kubenswrapper[4740]: I1014 13:37:08.765502 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5fdd5c7f69-nmdkf"] Oct 14 13:37:08.801490 master-1 
kubenswrapper[4740]: I1014 13:37:08.801281 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f852429f-606a-43cc-a4ec-e64ab8a24315","Type":"ContainerStarted","Data":"05fd94362b1caffe24822a807e1676ae36cc706244fe3c886e957134d9af2b4b"} Oct 14 13:37:08.821490 master-1 kubenswrapper[4740]: I1014 13:37:08.819354 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerStarted","Data":"6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6"} Oct 14 13:37:08.828833 master-1 kubenswrapper[4740]: I1014 13:37:08.827586 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-888f485ff-gvvzl"] Oct 14 13:37:08.848802 master-1 kubenswrapper[4740]: I1014 13:37:08.848719 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.7619599689999998 podStartE2EDuration="15.848700135s" podCreationTimestamp="2025-10-14 13:36:53 +0000 UTC" firstStartedPulling="2025-10-14 13:36:54.010060691 +0000 UTC m=+1839.820350020" lastFinishedPulling="2025-10-14 13:37:08.096800857 +0000 UTC m=+1853.907090186" observedRunningTime="2025-10-14 13:37:08.841040063 +0000 UTC m=+1854.651329392" watchObservedRunningTime="2025-10-14 13:37:08.848700135 +0000 UTC m=+1854.658989464" Oct 14 13:37:09.842782 master-1 kubenswrapper[4740]: I1014 13:37:09.838530 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6"} Oct 14 13:37:09.842782 master-1 kubenswrapper[4740]: I1014 13:37:09.840968 4740 scope.go:117] "RemoveContainer" containerID="28a0443fce7c8344840417a03e93a9362711545a57eafed187ec416fc5ed0bdc" Oct 14 13:37:09.842782 master-1 kubenswrapper[4740]: I1014 13:37:09.842399 4740 
generic.go:334] "Generic (PLEG): container finished" podID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerID="6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6" exitCode=1 Oct 14 13:37:09.847160 master-1 kubenswrapper[4740]: I1014 13:37:09.843914 4740 scope.go:117] "RemoveContainer" containerID="6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6" Oct 14 13:37:09.847160 master-1 kubenswrapper[4740]: E1014 13:37:09.844334 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:37:09.847160 master-1 kubenswrapper[4740]: I1014 13:37:09.845157 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerID="e47740ea63cb208d7f49619be8fbd3b91e265be9577737fdfe3a0e7e4019dd7d" exitCode=1 Oct 14 13:37:09.847160 master-1 kubenswrapper[4740]: I1014 13:37:09.845266 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"fbd3b301-ecc2-4099-846b-d9b6e7b6320d","Type":"ContainerDied","Data":"e47740ea63cb208d7f49619be8fbd3b91e265be9577737fdfe3a0e7e4019dd7d"} Oct 14 13:37:09.847160 master-1 kubenswrapper[4740]: I1014 13:37:09.846143 4740 scope.go:117] "RemoveContainer" containerID="e47740ea63cb208d7f49619be8fbd3b91e265be9577737fdfe3a0e7e4019dd7d" Oct 14 13:37:09.847160 master-1 kubenswrapper[4740]: E1014 13:37:09.846449 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" 
pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:37:09.848437 master-1 kubenswrapper[4740]: I1014 13:37:09.847773 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:09.849589 master-1 kubenswrapper[4740]: I1014 13:37:09.849556 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5fdd5c7f69-nmdkf" event={"ID":"ca413fda-083a-45c2-a544-c0c38f23632f","Type":"ContainerStarted","Data":"6cc0bb008a8f4f96513be376581be86bf9a81d461f64c5103204f76c7d7cdd71"} Oct 14 13:37:09.849589 master-1 kubenswrapper[4740]: I1014 13:37:09.849589 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5fdd5c7f69-nmdkf" event={"ID":"ca413fda-083a-45c2-a544-c0c38f23632f","Type":"ContainerStarted","Data":"b0642af09b416598cf3c521aa566c575abb3fdc058b09ec514d9813140846954"} Oct 14 13:37:09.851451 master-1 kubenswrapper[4740]: I1014 13:37:09.850337 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:09.852331 master-1 kubenswrapper[4740]: I1014 13:37:09.852278 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-888f485ff-gvvzl" event={"ID":"afd108f8-0a2c-427b-a952-7863fce4ffae","Type":"ContainerStarted","Data":"af1c6258587debeabd109ade1f90318013ffca6ce799a7f3148ddd9330c2b7c0"} Oct 14 13:37:09.897486 master-1 kubenswrapper[4740]: I1014 13:37:09.896819 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5fdd5c7f69-nmdkf" podStartSLOduration=4.896797262 podStartE2EDuration="4.896797262s" podCreationTimestamp="2025-10-14 13:37:05 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:37:09.891180394 +0000 UTC m=+1855.701469723" watchObservedRunningTime="2025-10-14 13:37:09.896797262 +0000 UTC m=+1855.707086591" Oct 14 13:37:09.914782 master-1 kubenswrapper[4740]: I1014 13:37:09.914531 4740 scope.go:117] "RemoveContainer" containerID="08a3440b28f23a87a7abbee4ff111d2336a5d9279e3573bc43384fd314d2b7fb" Oct 14 13:37:10.863860 master-1 kubenswrapper[4740]: I1014 13:37:10.863770 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-888f485ff-gvvzl" event={"ID":"afd108f8-0a2c-427b-a952-7863fce4ffae","Type":"ContainerStarted","Data":"e7f1028e196807bf0aac9b699da5819aa62fc49bc9c9aab2a3bc56f698112c4a"} Oct 14 13:37:10.865067 master-1 kubenswrapper[4740]: I1014 13:37:10.865016 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:11.842561 master-1 kubenswrapper[4740]: I1014 13:37:11.842445 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:11.843412 master-1 kubenswrapper[4740]: I1014 13:37:11.843276 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:11.843640 master-1 kubenswrapper[4740]: I1014 13:37:11.843592 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:11.843731 master-1 kubenswrapper[4740]: I1014 13:37:11.843672 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" 
probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:11.843804 master-1 kubenswrapper[4740]: I1014 13:37:11.843782 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:11.843873 master-1 kubenswrapper[4740]: I1014 13:37:11.843849 4740 scope.go:117] "RemoveContainer" containerID="e47740ea63cb208d7f49619be8fbd3b91e265be9577737fdfe3a0e7e4019dd7d" Oct 14 13:37:11.843919 master-1 kubenswrapper[4740]: I1014 13:37:11.843877 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:11.843992 master-1 kubenswrapper[4740]: I1014 13:37:11.843978 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:11.844837 master-1 kubenswrapper[4740]: E1014 13:37:11.844192 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:37:11.891889 master-1 kubenswrapper[4740]: I1014 13:37:11.891813 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:11.893462 master-1 
kubenswrapper[4740]: I1014 13:37:11.893307 4740 scope.go:117] "RemoveContainer" containerID="e47740ea63cb208d7f49619be8fbd3b91e265be9577737fdfe3a0e7e4019dd7d" Oct 14 13:37:11.894062 master-1 kubenswrapper[4740]: E1014 13:37:11.894033 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:37:11.997654 master-1 kubenswrapper[4740]: I1014 13:37:11.997536 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-888f485ff-gvvzl" podStartSLOduration=4.242795271 podStartE2EDuration="5.997510593s" podCreationTimestamp="2025-10-14 13:37:06 +0000 UTC" firstStartedPulling="2025-10-14 13:37:08.87168445 +0000 UTC m=+1854.681973779" lastFinishedPulling="2025-10-14 13:37:10.626399772 +0000 UTC m=+1856.436689101" observedRunningTime="2025-10-14 13:37:10.901191617 +0000 UTC m=+1856.711480956" watchObservedRunningTime="2025-10-14 13:37:11.997510593 +0000 UTC m=+1857.807799922" Oct 14 13:37:13.261811 master-1 kubenswrapper[4740]: I1014 13:37:13.261653 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57bb6bd49-6mtw2"] Oct 14 13:37:13.272760 master-1 kubenswrapper[4740]: I1014 13:37:13.272696 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.277204 master-1 kubenswrapper[4740]: I1014 13:37:13.277160 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 14 13:37:13.277455 master-1 kubenswrapper[4740]: I1014 13:37:13.277404 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 14 13:37:13.331602 master-1 kubenswrapper[4740]: I1014 13:37:13.331516 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57bb6bd49-6mtw2"] Oct 14 13:37:13.358936 master-1 kubenswrapper[4740]: I1014 13:37:13.358826 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-ovndb-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.359132 master-1 kubenswrapper[4740]: I1014 13:37:13.359026 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wtp\" (UniqueName: \"kubernetes.io/projected/901bb247-1714-4b64-b981-0fffccbf6992-kube-api-access-72wtp\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.359132 master-1 kubenswrapper[4740]: I1014 13:37:13.359095 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-public-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.359458 master-1 kubenswrapper[4740]: I1014 13:37:13.359392 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-httpd-config\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.359574 master-1 kubenswrapper[4740]: I1014 13:37:13.359540 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-internal-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.359648 master-1 kubenswrapper[4740]: I1014 13:37:13.359631 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-config\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.359698 master-1 kubenswrapper[4740]: I1014 13:37:13.359671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-combined-ca-bundle\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.462058 master-1 kubenswrapper[4740]: I1014 13:37:13.461092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-httpd-config\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.462058 master-1 kubenswrapper[4740]: I1014 13:37:13.461173 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-internal-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.462058 master-1 kubenswrapper[4740]: I1014 13:37:13.461210 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-config\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.462058 master-1 kubenswrapper[4740]: I1014 13:37:13.461236 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-combined-ca-bundle\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.462058 master-1 kubenswrapper[4740]: I1014 13:37:13.461282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-ovndb-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.462058 master-1 kubenswrapper[4740]: I1014 13:37:13.461331 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wtp\" (UniqueName: \"kubernetes.io/projected/901bb247-1714-4b64-b981-0fffccbf6992-kube-api-access-72wtp\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.462058 master-1 kubenswrapper[4740]: I1014 13:37:13.461362 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-public-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.465547 master-1 kubenswrapper[4740]: I1014 13:37:13.465483 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-config\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.465902 master-1 kubenswrapper[4740]: I1014 13:37:13.465851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-combined-ca-bundle\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.466375 master-1 kubenswrapper[4740]: I1014 13:37:13.465894 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-internal-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.466498 master-1 kubenswrapper[4740]: I1014 13:37:13.466447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-ovndb-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.467035 master-1 kubenswrapper[4740]: I1014 13:37:13.466943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-httpd-config\") pod 
\"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.467123 master-1 kubenswrapper[4740]: I1014 13:37:13.467061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/901bb247-1714-4b64-b981-0fffccbf6992-public-tls-certs\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.486251 master-1 kubenswrapper[4740]: I1014 13:37:13.486131 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wtp\" (UniqueName: \"kubernetes.io/projected/901bb247-1714-4b64-b981-0fffccbf6992-kube-api-access-72wtp\") pod \"neutron-57bb6bd49-6mtw2\" (UID: \"901bb247-1714-4b64-b981-0fffccbf6992\") " pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:13.594078 master-1 kubenswrapper[4740]: I1014 13:37:13.593920 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:14.491589 master-1 kubenswrapper[4740]: W1014 13:37:14.486346 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod901bb247_1714_4b64_b981_0fffccbf6992.slice/crio-7d77d642f2a8b5241173ae132d88c42e3ea0538e5a64be591371cbbe7ca7563a WatchSource:0}: Error finding container 7d77d642f2a8b5241173ae132d88c42e3ea0538e5a64be591371cbbe7ca7563a: Status 404 returned error can't find the container with id 7d77d642f2a8b5241173ae132d88c42e3ea0538e5a64be591371cbbe7ca7563a Oct 14 13:37:14.640858 master-1 kubenswrapper[4740]: I1014 13:37:14.640775 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57bb6bd49-6mtw2"] Oct 14 13:37:14.940324 master-1 kubenswrapper[4740]: I1014 13:37:14.940227 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57bb6bd49-6mtw2" event={"ID":"901bb247-1714-4b64-b981-0fffccbf6992","Type":"ContainerStarted","Data":"79e4b69f607b4215ce063b63ff086ce05cbf8e8c6a1953695a9556b9cafdca06"} Oct 14 13:37:14.940324 master-1 kubenswrapper[4740]: I1014 13:37:14.940322 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57bb6bd49-6mtw2" event={"ID":"901bb247-1714-4b64-b981-0fffccbf6992","Type":"ContainerStarted","Data":"7d77d642f2a8b5241173ae132d88c42e3ea0538e5a64be591371cbbe7ca7563a"} Oct 14 13:37:15.951577 master-1 kubenswrapper[4740]: I1014 13:37:15.951529 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57bb6bd49-6mtw2" event={"ID":"901bb247-1714-4b64-b981-0fffccbf6992","Type":"ContainerStarted","Data":"dfce0adb6ac28da12b4677b49a3fd6a734b05e097bf5223fbce84f8047f66bb7"} Oct 14 13:37:15.952630 master-1 kubenswrapper[4740]: I1014 13:37:15.951710 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:16.097056 master-1 kubenswrapper[4740]: I1014 
13:37:16.096974 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57bb6bd49-6mtw2" podStartSLOduration=3.096955262 podStartE2EDuration="3.096955262s" podCreationTimestamp="2025-10-14 13:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:37:16.069704335 +0000 UTC m=+1861.879993664" watchObservedRunningTime="2025-10-14 13:37:16.096955262 +0000 UTC m=+1861.907244591" Oct 14 13:37:18.415449 master-1 kubenswrapper[4740]: I1014 13:37:18.415348 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-888f485ff-gvvzl" Oct 14 13:37:21.944611 master-1 kubenswrapper[4740]: I1014 13:37:21.944524 4740 scope.go:117] "RemoveContainer" containerID="6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6" Oct 14 13:37:21.945664 master-1 kubenswrapper[4740]: E1014 13:37:21.945043 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:37:23.198012 master-1 kubenswrapper[4740]: I1014 13:37:23.197942 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-675bcd49b4-pn7dg"] Oct 14 13:37:23.198639 master-1 kubenswrapper[4740]: I1014 13:37:23.198487 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-675bcd49b4-pn7dg" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api-log" containerID="cri-o://fe568b18722cd611790e4cd90f989f5c0c3fb201de0d351de12d0f4deddaac5c" gracePeriod=60 Oct 14 13:37:23.198850 master-1 kubenswrapper[4740]: I1014 13:37:23.198783 4740 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ironic-675bcd49b4-pn7dg" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api" containerID="cri-o://0ea6b7ebb9f3754225faa51f61c305e498d797ffce16bac0b4921cb8e587bfb3" gracePeriod=60 Oct 14 13:37:24.050789 master-1 kubenswrapper[4740]: I1014 13:37:24.050712 4740 generic.go:334] "Generic (PLEG): container finished" podID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerID="fe568b18722cd611790e4cd90f989f5c0c3fb201de0d351de12d0f4deddaac5c" exitCode=143 Oct 14 13:37:24.050789 master-1 kubenswrapper[4740]: I1014 13:37:24.050788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-675bcd49b4-pn7dg" event={"ID":"2ea6549c-7eb4-4d05-9cd2-b9e448c39186","Type":"ContainerDied","Data":"fe568b18722cd611790e4cd90f989f5c0c3fb201de0d351de12d0f4deddaac5c"} Oct 14 13:37:24.584473 master-1 kubenswrapper[4740]: I1014 13:37:24.584383 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798b8945b9-285k5"] Oct 14 13:37:24.585134 master-1 kubenswrapper[4740]: I1014 13:37:24.584811 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-798b8945b9-285k5" podUID="83a644d5-c439-4938-8afb-e25b58786ea3" containerName="dnsmasq-dns" containerID="cri-o://42167b7e73b6c2ca8670a08edb5911efbde5d940a12054566a79b028a350a11c" gracePeriod=10 Oct 14 13:37:24.945222 master-1 kubenswrapper[4740]: I1014 13:37:24.945058 4740 scope.go:117] "RemoveContainer" containerID="e47740ea63cb208d7f49619be8fbd3b91e265be9577737fdfe3a0e7e4019dd7d" Oct 14 13:37:24.945509 master-1 kubenswrapper[4740]: I1014 13:37:24.945155 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:24.945694 master-1 
kubenswrapper[4740]: E1014 13:37:24.945632 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=glance-httpd pod=glance-46645-default-internal-api-1_openstack(fbd3b301-ecc2-4099-846b-d9b6e7b6320d)\"" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" Oct 14 13:37:25.099716 master-1 kubenswrapper[4740]: I1014 13:37:25.099342 4740 generic.go:334] "Generic (PLEG): container finished" podID="83a644d5-c439-4938-8afb-e25b58786ea3" containerID="42167b7e73b6c2ca8670a08edb5911efbde5d940a12054566a79b028a350a11c" exitCode=0 Oct 14 13:37:25.099716 master-1 kubenswrapper[4740]: I1014 13:37:25.099402 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798b8945b9-285k5" event={"ID":"83a644d5-c439-4938-8afb-e25b58786ea3","Type":"ContainerDied","Data":"42167b7e73b6c2ca8670a08edb5911efbde5d940a12054566a79b028a350a11c"} Oct 14 13:37:25.605404 master-1 kubenswrapper[4740]: I1014 13:37:25.605329 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5fdd5c7f69-nmdkf" Oct 14 13:37:25.626446 master-1 kubenswrapper[4740]: I1014 13:37:25.625153 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:37:26.110224 master-1 kubenswrapper[4740]: I1014 13:37:26.110171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798b8945b9-285k5" event={"ID":"83a644d5-c439-4938-8afb-e25b58786ea3","Type":"ContainerDied","Data":"8c472fa5bb1861843e0a163d32ca27d3f223f6b0dcb822263486945743d501ba"} Oct 14 13:37:26.110477 master-1 kubenswrapper[4740]: I1014 13:37:26.110259 4740 scope.go:117] "RemoveContainer" containerID="42167b7e73b6c2ca8670a08edb5911efbde5d940a12054566a79b028a350a11c" Oct 14 13:37:26.110477 master-1 kubenswrapper[4740]: I1014 13:37:26.110323 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798b8945b9-285k5" Oct 14 13:37:26.139966 master-1 kubenswrapper[4740]: I1014 13:37:26.139915 4740 scope.go:117] "RemoveContainer" containerID="4f18119303bbd765ff611c77fcf9646c1aea81b4054c9b43a4c67a0362a165f6" Oct 14 13:37:27.136302 master-1 kubenswrapper[4740]: I1014 13:37:27.136249 4740 generic.go:334] "Generic (PLEG): container finished" podID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerID="0ea6b7ebb9f3754225faa51f61c305e498d797ffce16bac0b4921cb8e587bfb3" exitCode=0 Oct 14 13:37:27.137010 master-1 kubenswrapper[4740]: I1014 13:37:27.136988 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-675bcd49b4-pn7dg" event={"ID":"2ea6549c-7eb4-4d05-9cd2-b9e448c39186","Type":"ContainerDied","Data":"0ea6b7ebb9f3754225faa51f61c305e498d797ffce16bac0b4921cb8e587bfb3"} Oct 14 13:37:27.434653 master-1 kubenswrapper[4740]: I1014 13:37:27.433188 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:37:28.146077 master-1 kubenswrapper[4740]: I1014 13:37:28.146008 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-675bcd49b4-pn7dg" event={"ID":"2ea6549c-7eb4-4d05-9cd2-b9e448c39186","Type":"ContainerDied","Data":"32e2694c27e01f7d295452495678c8612a59e92c9e4958db99ac11687735b6a0"} Oct 14 13:37:28.146077 master-1 kubenswrapper[4740]: I1014 13:37:28.146062 4740 scope.go:117] "RemoveContainer" containerID="0ea6b7ebb9f3754225faa51f61c305e498d797ffce16bac0b4921cb8e587bfb3" Oct 14 13:37:28.147156 master-1 kubenswrapper[4740]: I1014 13:37:28.146166 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-675bcd49b4-pn7dg" Oct 14 13:37:28.179949 master-1 kubenswrapper[4740]: I1014 13:37:28.179891 4740 scope.go:117] "RemoveContainer" containerID="fe568b18722cd611790e4cd90f989f5c0c3fb201de0d351de12d0f4deddaac5c" Oct 14 13:37:28.213651 master-1 kubenswrapper[4740]: I1014 13:37:28.213555 4740 scope.go:117] "RemoveContainer" containerID="8b058597047e7fe49ad18607b90259fa3da40b0c4c624680bec3ba26687bbf31" Oct 14 13:37:28.438492 master-1 kubenswrapper[4740]: I1014 13:37:28.438136 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-scripts\") pod \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.438728 master-1 kubenswrapper[4740]: I1014 13:37:28.438530 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data\") pod \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.438728 master-1 kubenswrapper[4740]: I1014 13:37:28.438610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-config\") pod \"83a644d5-c439-4938-8afb-e25b58786ea3\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " Oct 14 13:37:28.438728 master-1 kubenswrapper[4740]: I1014 13:37:28.438712 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-combined-ca-bundle\") pod \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.438838 master-1 kubenswrapper[4740]: I1014 13:37:28.438798 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-etc-podinfo\") pod \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.438873 master-1 kubenswrapper[4740]: I1014 13:37:28.438839 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z77p2\" (UniqueName: \"kubernetes.io/projected/83a644d5-c439-4938-8afb-e25b58786ea3-kube-api-access-z77p2\") pod \"83a644d5-c439-4938-8afb-e25b58786ea3\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 13:37:28.438940 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-sb\") pod \"83a644d5-c439-4938-8afb-e25b58786ea3\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 13:37:28.438987 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-custom\") pod 
\"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 13:37:28.439056 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-nb\") pod \"83a644d5-c439-4938-8afb-e25b58786ea3\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 13:37:28.439092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-swift-storage-0\") pod \"83a644d5-c439-4938-8afb-e25b58786ea3\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 13:37:28.439132 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcddq\" (UniqueName: \"kubernetes.io/projected/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-kube-api-access-xcddq\") pod \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 13:37:28.439220 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-logs\") pod \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 13:37:28.439271 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-merged\") pod \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\" (UID: \"2ea6549c-7eb4-4d05-9cd2-b9e448c39186\") " Oct 14 13:37:28.439591 master-1 kubenswrapper[4740]: I1014 
13:37:28.439314 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-svc\") pod \"83a644d5-c439-4938-8afb-e25b58786ea3\" (UID: \"83a644d5-c439-4938-8afb-e25b58786ea3\") " Oct 14 13:37:28.440456 master-1 kubenswrapper[4740]: I1014 13:37:28.440166 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-logs" (OuterVolumeSpecName: "logs") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:37:28.440861 master-1 kubenswrapper[4740]: I1014 13:37:28.440818 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.441065 master-1 kubenswrapper[4740]: I1014 13:37:28.441010 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:37:28.444899 master-1 kubenswrapper[4740]: I1014 13:37:28.444815 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-scripts" (OuterVolumeSpecName: "scripts") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:28.445224 master-1 kubenswrapper[4740]: I1014 13:37:28.445157 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a644d5-c439-4938-8afb-e25b58786ea3-kube-api-access-z77p2" (OuterVolumeSpecName: "kube-api-access-z77p2") pod "83a644d5-c439-4938-8afb-e25b58786ea3" (UID: "83a644d5-c439-4938-8afb-e25b58786ea3"). InnerVolumeSpecName "kube-api-access-z77p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:37:28.447097 master-1 kubenswrapper[4740]: I1014 13:37:28.446525 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 13:37:28.447168 master-1 kubenswrapper[4740]: I1014 13:37:28.447096 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-kube-api-access-xcddq" (OuterVolumeSpecName: "kube-api-access-xcddq") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "kube-api-access-xcddq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:37:28.450772 master-1 kubenswrapper[4740]: I1014 13:37:28.450711 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:28.482389 master-1 kubenswrapper[4740]: I1014 13:37:28.482201 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data" (OuterVolumeSpecName: "config-data") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:28.485464 master-1 kubenswrapper[4740]: I1014 13:37:28.485393 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "83a644d5-c439-4938-8afb-e25b58786ea3" (UID: "83a644d5-c439-4938-8afb-e25b58786ea3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:37:28.487275 master-1 kubenswrapper[4740]: I1014 13:37:28.487184 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-config" (OuterVolumeSpecName: "config") pod "83a644d5-c439-4938-8afb-e25b58786ea3" (UID: "83a644d5-c439-4938-8afb-e25b58786ea3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:37:28.494295 master-1 kubenswrapper[4740]: I1014 13:37:28.494137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83a644d5-c439-4938-8afb-e25b58786ea3" (UID: "83a644d5-c439-4938-8afb-e25b58786ea3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:37:28.497183 master-1 kubenswrapper[4740]: I1014 13:37:28.497067 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "83a644d5-c439-4938-8afb-e25b58786ea3" (UID: "83a644d5-c439-4938-8afb-e25b58786ea3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:37:28.497564 master-1 kubenswrapper[4740]: I1014 13:37:28.497520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "83a644d5-c439-4938-8afb-e25b58786ea3" (UID: "83a644d5-c439-4938-8afb-e25b58786ea3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:37:28.508278 master-1 kubenswrapper[4740]: I1014 13:37:28.508145 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea6549c-7eb4-4d05-9cd2-b9e448c39186" (UID: "2ea6549c-7eb4-4d05-9cd2-b9e448c39186"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542545 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542601 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542616 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcddq\" (UniqueName: \"kubernetes.io/projected/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-kube-api-access-xcddq\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542624 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-merged\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542633 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542642 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-scripts\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542652 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542664 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542677 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542688 4740 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-etc-podinfo\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542697 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z77p2\" (UniqueName: \"kubernetes.io/projected/83a644d5-c439-4938-8afb-e25b58786ea3-kube-api-access-z77p2\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542707 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83a644d5-c439-4938-8afb-e25b58786ea3-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.543670 master-1 kubenswrapper[4740]: I1014 13:37:28.542715 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ea6549c-7eb4-4d05-9cd2-b9e448c39186-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:28.759287 master-1 kubenswrapper[4740]: I1014 13:37:28.759158 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-engine-7588d45c67-s98sq"] Oct 14 13:37:28.759571 master-1 kubenswrapper[4740]: I1014 13:37:28.759445 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7588d45c67-s98sq" podUID="2a30d790-7aaa-4754-a568-1a3804649217" containerName="heat-engine" containerID="cri-o://51771210c49aed6393c59011363ce93f2dcdafd2b01f355b3ad1bc2d81ce7fd7" gracePeriod=60 Oct 14 13:37:32.314897 master-1 kubenswrapper[4740]: I1014 13:37:32.314787 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ironic-675bcd49b4-pn7dg" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api" probeResult="failure" output="Get \"http://10.128.0.155:6385/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:37:33.943781 master-1 kubenswrapper[4740]: I1014 13:37:33.943704 4740 scope.go:117] "RemoveContainer" containerID="6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6" Oct 14 13:37:33.945424 master-1 kubenswrapper[4740]: E1014 13:37:33.944800 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:37:34.987467 master-1 kubenswrapper[4740]: I1014 13:37:34.987173 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-675bcd49b4-pn7dg"] Oct 14 13:37:35.412980 master-1 kubenswrapper[4740]: I1014 13:37:35.412778 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-675bcd49b4-pn7dg"] Oct 14 13:37:36.760011 master-1 kubenswrapper[4740]: I1014 13:37:36.759928 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-46645-default-internal-api-1"] Oct 14 13:37:36.760706 master-1 
kubenswrapper[4740]: I1014 13:37:36.760569 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" containerID="cri-o://364a84ee9bc58c48f672391b8539f8003048bb79ffba9a01103ff45c1e9d5b2c" gracePeriod=30 Oct 14 13:37:36.764446 master-1 kubenswrapper[4740]: I1014 13:37:36.764376 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:36.880600 master-1 kubenswrapper[4740]: I1014 13:37:36.880521 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798b8945b9-285k5"] Oct 14 13:37:36.962692 master-1 kubenswrapper[4740]: I1014 13:37:36.962500 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" path="/var/lib/kubelet/pods/2ea6549c-7eb4-4d05-9cd2-b9e448c39186/volumes" Oct 14 13:37:37.244940 master-1 kubenswrapper[4740]: I1014 13:37:37.244883 4740 generic.go:334] "Generic (PLEG): container finished" podID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerID="364a84ee9bc58c48f672391b8539f8003048bb79ffba9a01103ff45c1e9d5b2c" exitCode=143 Oct 14 13:37:37.245278 master-1 kubenswrapper[4740]: I1014 13:37:37.244961 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"fbd3b301-ecc2-4099-846b-d9b6e7b6320d","Type":"ContainerDied","Data":"364a84ee9bc58c48f672391b8539f8003048bb79ffba9a01103ff45c1e9d5b2c"} Oct 14 13:37:37.248062 master-1 kubenswrapper[4740]: I1014 13:37:37.248003 4740 generic.go:334] "Generic (PLEG): container finished" podID="2a30d790-7aaa-4754-a568-1a3804649217" 
containerID="51771210c49aed6393c59011363ce93f2dcdafd2b01f355b3ad1bc2d81ce7fd7" exitCode=0 Oct 14 13:37:37.248139 master-1 kubenswrapper[4740]: I1014 13:37:37.248064 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7588d45c67-s98sq" event={"ID":"2a30d790-7aaa-4754-a568-1a3804649217","Type":"ContainerDied","Data":"51771210c49aed6393c59011363ce93f2dcdafd2b01f355b3ad1bc2d81ce7fd7"} Oct 14 13:37:37.576070 master-1 kubenswrapper[4740]: I1014 13:37:37.575993 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:37:37.756022 master-1 kubenswrapper[4740]: I1014 13:37:37.755781 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data-custom\") pod \"2a30d790-7aaa-4754-a568-1a3804649217\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " Oct 14 13:37:37.756252 master-1 kubenswrapper[4740]: I1014 13:37:37.756094 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5wx2\" (UniqueName: \"kubernetes.io/projected/2a30d790-7aaa-4754-a568-1a3804649217-kube-api-access-h5wx2\") pod \"2a30d790-7aaa-4754-a568-1a3804649217\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " Oct 14 13:37:37.756252 master-1 kubenswrapper[4740]: I1014 13:37:37.756168 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-combined-ca-bundle\") pod \"2a30d790-7aaa-4754-a568-1a3804649217\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " Oct 14 13:37:37.756339 master-1 kubenswrapper[4740]: I1014 13:37:37.756290 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data\") 
pod \"2a30d790-7aaa-4754-a568-1a3804649217\" (UID: \"2a30d790-7aaa-4754-a568-1a3804649217\") " Oct 14 13:37:37.763305 master-1 kubenswrapper[4740]: I1014 13:37:37.759368 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a30d790-7aaa-4754-a568-1a3804649217" (UID: "2a30d790-7aaa-4754-a568-1a3804649217"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:37.763305 master-1 kubenswrapper[4740]: I1014 13:37:37.759864 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a30d790-7aaa-4754-a568-1a3804649217-kube-api-access-h5wx2" (OuterVolumeSpecName: "kube-api-access-h5wx2") pod "2a30d790-7aaa-4754-a568-1a3804649217" (UID: "2a30d790-7aaa-4754-a568-1a3804649217"). InnerVolumeSpecName "kube-api-access-h5wx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:37:37.783087 master-1 kubenswrapper[4740]: I1014 13:37:37.781768 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a30d790-7aaa-4754-a568-1a3804649217" (UID: "2a30d790-7aaa-4754-a568-1a3804649217"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:37.802623 master-1 kubenswrapper[4740]: I1014 13:37:37.802365 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data" (OuterVolumeSpecName: "config-data") pod "2a30d790-7aaa-4754-a568-1a3804649217" (UID: "2a30d790-7aaa-4754-a568-1a3804649217"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:37.859434 master-1 kubenswrapper[4740]: I1014 13:37:37.859355 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5wx2\" (UniqueName: \"kubernetes.io/projected/2a30d790-7aaa-4754-a568-1a3804649217-kube-api-access-h5wx2\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:37.859434 master-1 kubenswrapper[4740]: I1014 13:37:37.859425 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:37.859612 master-1 kubenswrapper[4740]: I1014 13:37:37.859445 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:37.859612 master-1 kubenswrapper[4740]: I1014 13:37:37.859463 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a30d790-7aaa-4754-a568-1a3804649217-config-data-custom\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:38.276052 master-1 kubenswrapper[4740]: I1014 13:37:38.275925 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7588d45c67-s98sq" event={"ID":"2a30d790-7aaa-4754-a568-1a3804649217","Type":"ContainerDied","Data":"25e6d4f9f82daf31a37398c8060aac77d2c685d18b6942d85a7a3df7b468378d"} Oct 14 13:37:38.276052 master-1 kubenswrapper[4740]: I1014 13:37:38.276025 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7588d45c67-s98sq" Oct 14 13:37:38.276472 master-1 kubenswrapper[4740]: I1014 13:37:38.276068 4740 scope.go:117] "RemoveContainer" containerID="51771210c49aed6393c59011363ce93f2dcdafd2b01f355b3ad1bc2d81ce7fd7" Oct 14 13:37:38.806989 master-1 kubenswrapper[4740]: I1014 13:37:38.806859 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-798b8945b9-285k5"] Oct 14 13:37:38.954539 master-1 kubenswrapper[4740]: I1014 13:37:38.954452 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a644d5-c439-4938-8afb-e25b58786ea3" path="/var/lib/kubelet/pods/83a644d5-c439-4938-8afb-e25b58786ea3/volumes" Oct 14 13:37:41.843178 master-1 kubenswrapper[4740]: I1014 13:37:41.843083 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-46645-default-internal-api-1" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" probeResult="failure" output="Get \"http://10.128.0.156:9292/healthcheck\": dial tcp 10.128.0.156:9292: connect: connection refused" Oct 14 13:37:41.902914 master-1 kubenswrapper[4740]: I1014 13:37:41.902786 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7588d45c67-s98sq"] Oct 14 13:37:42.127484 master-1 kubenswrapper[4740]: I1014 13:37:42.127296 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7588d45c67-s98sq"] Oct 14 13:37:42.956958 master-1 kubenswrapper[4740]: I1014 13:37:42.956871 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a30d790-7aaa-4754-a568-1a3804649217" path="/var/lib/kubelet/pods/2a30d790-7aaa-4754-a568-1a3804649217/volumes" Oct 14 13:37:43.618553 master-1 kubenswrapper[4740]: I1014 13:37:43.618487 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57bb6bd49-6mtw2" Oct 14 13:37:44.028187 master-1 kubenswrapper[4740]: I1014 13:37:44.022838 4740 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/neutron-55c4fcb4cb-xfg9j"] Oct 14 13:37:44.030799 master-1 kubenswrapper[4740]: I1014 13:37:44.030149 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55c4fcb4cb-xfg9j" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-api" containerID="cri-o://a5fc8ecdc6b86053832093cfe590deac075689a8f33aebf33bb2e2ce8db6920c" gracePeriod=30 Oct 14 13:37:44.031634 master-1 kubenswrapper[4740]: I1014 13:37:44.030265 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55c4fcb4cb-xfg9j" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-httpd" containerID="cri-o://07fb7818c23e8c64e34cf8ad9848c0665dc4020ff4bf533314698979d8687fcf" gracePeriod=30 Oct 14 13:37:44.217436 master-1 kubenswrapper[4740]: I1014 13:37:44.217387 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:44.311537 master-1 kubenswrapper[4740]: I1014 13:37:44.311484 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-config-data\") pod \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " Oct 14 13:37:44.311767 master-1 kubenswrapper[4740]: I1014 13:37:44.311656 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-scripts\") pod \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " Oct 14 13:37:44.311767 master-1 kubenswrapper[4740]: I1014 13:37:44.311724 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbpcp\" (UniqueName: 
\"kubernetes.io/projected/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-kube-api-access-mbpcp\") pod \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " Oct 14 13:37:44.312196 master-1 kubenswrapper[4740]: I1014 13:37:44.312170 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " Oct 14 13:37:44.312305 master-1 kubenswrapper[4740]: I1014 13:37:44.312284 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-logs\") pod \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " Oct 14 13:37:44.312503 master-1 kubenswrapper[4740]: I1014 13:37:44.312472 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-httpd-run\") pod \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " Oct 14 13:37:44.312576 master-1 kubenswrapper[4740]: I1014 13:37:44.312538 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-combined-ca-bundle\") pod \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\" (UID: \"fbd3b301-ecc2-4099-846b-d9b6e7b6320d\") " Oct 14 13:37:44.313279 master-1 kubenswrapper[4740]: I1014 13:37:44.313215 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fbd3b301-ecc2-4099-846b-d9b6e7b6320d" (UID: "fbd3b301-ecc2-4099-846b-d9b6e7b6320d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:37:44.314660 master-1 kubenswrapper[4740]: I1014 13:37:44.314621 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-logs" (OuterVolumeSpecName: "logs") pod "fbd3b301-ecc2-4099-846b-d9b6e7b6320d" (UID: "fbd3b301-ecc2-4099-846b-d9b6e7b6320d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:37:44.315546 master-1 kubenswrapper[4740]: I1014 13:37:44.315492 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:44.315546 master-1 kubenswrapper[4740]: I1014 13:37:44.315523 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-httpd-run\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:44.317072 master-1 kubenswrapper[4740]: I1014 13:37:44.316467 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-scripts" (OuterVolumeSpecName: "scripts") pod "fbd3b301-ecc2-4099-846b-d9b6e7b6320d" (UID: "fbd3b301-ecc2-4099-846b-d9b6e7b6320d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:44.321153 master-1 kubenswrapper[4740]: I1014 13:37:44.321059 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-kube-api-access-mbpcp" (OuterVolumeSpecName: "kube-api-access-mbpcp") pod "fbd3b301-ecc2-4099-846b-d9b6e7b6320d" (UID: "fbd3b301-ecc2-4099-846b-d9b6e7b6320d"). InnerVolumeSpecName "kube-api-access-mbpcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:37:44.336161 master-1 kubenswrapper[4740]: I1014 13:37:44.336100 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638" (OuterVolumeSpecName: "glance") pod "fbd3b301-ecc2-4099-846b-d9b6e7b6320d" (UID: "fbd3b301-ecc2-4099-846b-d9b6e7b6320d"). InnerVolumeSpecName "pvc-35cc00af-913d-4452-bde4-76f8c7c6579e". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 13:37:44.346804 master-1 kubenswrapper[4740]: I1014 13:37:44.346686 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbd3b301-ecc2-4099-846b-d9b6e7b6320d" (UID: "fbd3b301-ecc2-4099-846b-d9b6e7b6320d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:44.357718 master-1 kubenswrapper[4740]: I1014 13:37:44.357658 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:44.357936 master-1 kubenswrapper[4740]: I1014 13:37:44.357603 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"fbd3b301-ecc2-4099-846b-d9b6e7b6320d","Type":"ContainerDied","Data":"9e476382f771003447f81b5bed083830c3ccd39f68b10a151feb8a5a647e6b5c"} Oct 14 13:37:44.357936 master-1 kubenswrapper[4740]: I1014 13:37:44.357815 4740 scope.go:117] "RemoveContainer" containerID="e47740ea63cb208d7f49619be8fbd3b91e265be9577737fdfe3a0e7e4019dd7d" Oct 14 13:37:44.359885 master-1 kubenswrapper[4740]: I1014 13:37:44.359842 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-config-data" (OuterVolumeSpecName: "config-data") pod "fbd3b301-ecc2-4099-846b-d9b6e7b6320d" (UID: "fbd3b301-ecc2-4099-846b-d9b6e7b6320d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:44.362018 master-1 kubenswrapper[4740]: I1014 13:37:44.361975 4740 generic.go:334] "Generic (PLEG): container finished" podID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerID="07fb7818c23e8c64e34cf8ad9848c0665dc4020ff4bf533314698979d8687fcf" exitCode=0 Oct 14 13:37:44.362094 master-1 kubenswrapper[4740]: I1014 13:37:44.362028 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c4fcb4cb-xfg9j" event={"ID":"0401d960-0b3b-4a30-93de-4dc6064a8943","Type":"ContainerDied","Data":"07fb7818c23e8c64e34cf8ad9848c0665dc4020ff4bf533314698979d8687fcf"} Oct 14 13:37:44.417993 master-1 kubenswrapper[4740]: I1014 13:37:44.417927 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") on node \"master-1\" " Oct 14 13:37:44.418124 master-1 kubenswrapper[4740]: I1014 
13:37:44.418013 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:44.418124 master-1 kubenswrapper[4740]: I1014 13:37:44.418034 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:44.418124 master-1 kubenswrapper[4740]: I1014 13:37:44.418046 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-scripts\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:44.418124 master-1 kubenswrapper[4740]: I1014 13:37:44.418060 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbpcp\" (UniqueName: \"kubernetes.io/projected/fbd3b301-ecc2-4099-846b-d9b6e7b6320d-kube-api-access-mbpcp\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:44.446354 master-1 kubenswrapper[4740]: I1014 13:37:44.445444 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 14 13:37:44.446354 master-1 kubenswrapper[4740]: I1014 13:37:44.445839 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-35cc00af-913d-4452-bde4-76f8c7c6579e" (UniqueName: "kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638") on node "master-1" Oct 14 13:37:44.458923 master-1 kubenswrapper[4740]: I1014 13:37:44.458770 4740 scope.go:117] "RemoveContainer" containerID="364a84ee9bc58c48f672391b8539f8003048bb79ffba9a01103ff45c1e9d5b2c" Oct 14 13:37:44.521407 master-1 kubenswrapper[4740]: I1014 13:37:44.521357 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:44.874138 master-1 kubenswrapper[4740]: I1014 13:37:44.873667 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-46645-default-internal-api-1"] Oct 14 13:37:44.961312 master-1 kubenswrapper[4740]: I1014 13:37:44.960183 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-46645-default-internal-api-1"] Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.411203 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-46645-default-internal-api-1"] Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.411906 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.411928 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.411952 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a644d5-c439-4938-8afb-e25b58786ea3" containerName="init" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: 
I1014 13:37:45.411961 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a644d5-c439-4938-8afb-e25b58786ea3" containerName="init" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.411977 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a644d5-c439-4938-8afb-e25b58786ea3" containerName="dnsmasq-dns" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.411986 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a644d5-c439-4938-8afb-e25b58786ea3" containerName="dnsmasq-dns" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.412003 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="init" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.412010 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="init" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.412033 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a30d790-7aaa-4754-a568-1a3804649217" containerName="heat-engine" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.412073 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a30d790-7aaa-4754-a568-1a3804649217" containerName="heat-engine" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.412085 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.412091 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.412109 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api-log" Oct 14 
13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.412115 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api-log" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.412125 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.412131 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: E1014 13:37:45.412143 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.412164 master-1 kubenswrapper[4740]: I1014 13:37:45.412149 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412391 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a30d790-7aaa-4754-a568-1a3804649217" containerName="heat-engine" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412418 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a644d5-c439-4938-8afb-e25b58786ea3" containerName="dnsmasq-dns" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412442 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api-log" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412450 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412459 4740 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-log" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412469 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea6549c-7eb4-4d05-9cd2-b9e448c39186" containerName="ironic-api" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412486 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412498 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: E1014 13:37:45.412714 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.413547 master-1 kubenswrapper[4740]: I1014 13:37:45.412726 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" containerName="glance-httpd" Oct 14 13:37:45.420980 master-1 kubenswrapper[4740]: I1014 13:37:45.420934 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.452868 master-1 kubenswrapper[4740]: I1014 13:37:45.452808 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 13:37:45.453131 master-1 kubenswrapper[4740]: I1014 13:37:45.453049 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-46645-default-internal-config-data" Oct 14 13:37:45.661944 master-1 kubenswrapper[4740]: I1014 13:37:45.661866 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-internal-api-1"] Oct 14 13:37:45.753934 master-1 kubenswrapper[4740]: I1014 13:37:45.753878 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qm5\" (UniqueName: \"kubernetes.io/projected/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-kube-api-access-x2qm5\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.753934 master-1 kubenswrapper[4740]: I1014 13:37:45.753931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-combined-ca-bundle\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.754198 master-1 kubenswrapper[4740]: I1014 13:37:45.753967 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-scripts\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.754906 master-1 
kubenswrapper[4740]: I1014 13:37:45.754824 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-logs\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.755086 master-1 kubenswrapper[4740]: I1014 13:37:45.755040 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-httpd-run\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.755343 master-1 kubenswrapper[4740]: I1014 13:37:45.755307 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-config-data\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.755686 master-1 kubenswrapper[4740]: I1014 13:37:45.755647 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.755736 master-1 kubenswrapper[4740]: I1014 13:37:45.755697 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-internal-tls-certs\") pod 
\"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.856767 master-1 kubenswrapper[4740]: I1014 13:37:45.856714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.857257 master-1 kubenswrapper[4740]: I1014 13:37:45.857176 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-internal-tls-certs\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.858629 master-1 kubenswrapper[4740]: I1014 13:37:45.857385 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qm5\" (UniqueName: \"kubernetes.io/projected/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-kube-api-access-x2qm5\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.858629 master-1 kubenswrapper[4740]: I1014 13:37:45.857432 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-combined-ca-bundle\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.858629 master-1 kubenswrapper[4740]: I1014 13:37:45.857476 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-scripts\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.858629 master-1 kubenswrapper[4740]: I1014 13:37:45.857523 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-logs\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.858629 master-1 kubenswrapper[4740]: I1014 13:37:45.857568 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-httpd-run\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.858629 master-1 kubenswrapper[4740]: I1014 13:37:45.857630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-config-data\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.858629 master-1 kubenswrapper[4740]: I1014 13:37:45.858442 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-logs\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.863468 master-1 kubenswrapper[4740]: I1014 13:37:45.863396 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-httpd-run\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.864013 master-1 kubenswrapper[4740]: I1014 13:37:45.863948 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 13:37:45.864314 master-1 kubenswrapper[4740]: I1014 13:37:45.864027 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/92058cb70342f8fc4137d0387239503103ba12b3a1ee5489530157139323bc4f/globalmount\"" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.866708 master-1 kubenswrapper[4740]: I1014 13:37:45.866584 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-config-data\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.871343 master-1 kubenswrapper[4740]: I1014 13:37:45.871290 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-combined-ca-bundle\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.874033 master-1 kubenswrapper[4740]: I1014 13:37:45.874002 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-scripts\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.882674 master-1 kubenswrapper[4740]: I1014 13:37:45.882606 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-internal-tls-certs\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:45.894742 master-1 kubenswrapper[4740]: I1014 13:37:45.894696 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qm5\" (UniqueName: \"kubernetes.io/projected/4a2751e7-a85f-4aec-9051-6bb7ebe85eb9-kube-api-access-x2qm5\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:46.768773 master-1 kubenswrapper[4740]: I1014 13:37:46.768700 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35cc00af-913d-4452-bde4-76f8c7c6579e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c954d986-4e37-4bc3-be83-2a2283748638\") pod \"glance-46645-default-internal-api-1\" (UID: \"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9\") " pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:46.937491 master-1 kubenswrapper[4740]: I1014 13:37:46.937357 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:46.969614 master-1 kubenswrapper[4740]: I1014 13:37:46.969552 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd3b301-ecc2-4099-846b-d9b6e7b6320d" path="/var/lib/kubelet/pods/fbd3b301-ecc2-4099-846b-d9b6e7b6320d/volumes" Oct 14 13:37:47.401161 master-1 kubenswrapper[4740]: I1014 13:37:47.401057 4740 generic.go:334] "Generic (PLEG): container finished" podID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerID="a5fc8ecdc6b86053832093cfe590deac075689a8f33aebf33bb2e2ce8db6920c" exitCode=0 Oct 14 13:37:47.401161 master-1 kubenswrapper[4740]: I1014 13:37:47.401111 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c4fcb4cb-xfg9j" event={"ID":"0401d960-0b3b-4a30-93de-4dc6064a8943","Type":"ContainerDied","Data":"a5fc8ecdc6b86053832093cfe590deac075689a8f33aebf33bb2e2ce8db6920c"} Oct 14 13:37:47.918914 master-1 kubenswrapper[4740]: I1014 13:37:47.918072 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-internal-api-1"] Oct 14 13:37:47.930454 master-1 kubenswrapper[4740]: W1014 13:37:47.930380 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a2751e7_a85f_4aec_9051_6bb7ebe85eb9.slice/crio-aa1cead014e45fe70ff2445a64f0961025dfefb412d5fb84a0e585f9c2cec606 WatchSource:0}: Error finding container aa1cead014e45fe70ff2445a64f0961025dfefb412d5fb84a0e585f9c2cec606: Status 404 returned error can't find the container with id aa1cead014e45fe70ff2445a64f0961025dfefb412d5fb84a0e585f9c2cec606 Oct 14 13:37:48.352670 master-1 kubenswrapper[4740]: I1014 13:37:48.351748 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55c4fcb4cb-xfg9j" Oct 14 13:37:48.424243 master-1 kubenswrapper[4740]: I1014 13:37:48.424000 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-combined-ca-bundle\") pod \"0401d960-0b3b-4a30-93de-4dc6064a8943\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " Oct 14 13:37:48.427820 master-1 kubenswrapper[4740]: I1014 13:37:48.424548 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-ovndb-tls-certs\") pod \"0401d960-0b3b-4a30-93de-4dc6064a8943\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " Oct 14 13:37:48.427820 master-1 kubenswrapper[4740]: I1014 13:37:48.424618 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-httpd-config\") pod \"0401d960-0b3b-4a30-93de-4dc6064a8943\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " Oct 14 13:37:48.427820 master-1 kubenswrapper[4740]: I1014 13:37:48.424696 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2mg9\" (UniqueName: \"kubernetes.io/projected/0401d960-0b3b-4a30-93de-4dc6064a8943-kube-api-access-p2mg9\") pod \"0401d960-0b3b-4a30-93de-4dc6064a8943\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " Oct 14 13:37:48.427820 master-1 kubenswrapper[4740]: I1014 13:37:48.425059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-config\") pod \"0401d960-0b3b-4a30-93de-4dc6064a8943\" (UID: \"0401d960-0b3b-4a30-93de-4dc6064a8943\") " Oct 14 13:37:48.427820 master-1 kubenswrapper[4740]: I1014 13:37:48.427390 4740 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0401d960-0b3b-4a30-93de-4dc6064a8943" (UID: "0401d960-0b3b-4a30-93de-4dc6064a8943"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:48.431245 master-1 kubenswrapper[4740]: I1014 13:37:48.428893 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0401d960-0b3b-4a30-93de-4dc6064a8943-kube-api-access-p2mg9" (OuterVolumeSpecName: "kube-api-access-p2mg9") pod "0401d960-0b3b-4a30-93de-4dc6064a8943" (UID: "0401d960-0b3b-4a30-93de-4dc6064a8943"). InnerVolumeSpecName "kube-api-access-p2mg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:37:48.455258 master-1 kubenswrapper[4740]: I1014 13:37:48.452387 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9","Type":"ContainerStarted","Data":"aa1cead014e45fe70ff2445a64f0961025dfefb412d5fb84a0e585f9c2cec606"} Oct 14 13:37:48.456962 master-1 kubenswrapper[4740]: I1014 13:37:48.456132 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c4fcb4cb-xfg9j" event={"ID":"0401d960-0b3b-4a30-93de-4dc6064a8943","Type":"ContainerDied","Data":"d2bb76996b0df7d1d10c465556d9194de87115973f9ee9e76910685ae5ec2966"} Oct 14 13:37:48.456962 master-1 kubenswrapper[4740]: I1014 13:37:48.456185 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55c4fcb4cb-xfg9j" Oct 14 13:37:48.456962 master-1 kubenswrapper[4740]: I1014 13:37:48.456189 4740 scope.go:117] "RemoveContainer" containerID="07fb7818c23e8c64e34cf8ad9848c0665dc4020ff4bf533314698979d8687fcf" Oct 14 13:37:48.485692 master-1 kubenswrapper[4740]: I1014 13:37:48.481745 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-config" (OuterVolumeSpecName: "config") pod "0401d960-0b3b-4a30-93de-4dc6064a8943" (UID: "0401d960-0b3b-4a30-93de-4dc6064a8943"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:48.490607 master-1 kubenswrapper[4740]: I1014 13:37:48.490495 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0401d960-0b3b-4a30-93de-4dc6064a8943" (UID: "0401d960-0b3b-4a30-93de-4dc6064a8943"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:48.503814 master-1 kubenswrapper[4740]: I1014 13:37:48.503633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0401d960-0b3b-4a30-93de-4dc6064a8943" (UID: "0401d960-0b3b-4a30-93de-4dc6064a8943"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:48.527963 master-1 kubenswrapper[4740]: I1014 13:37:48.527893 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2mg9\" (UniqueName: \"kubernetes.io/projected/0401d960-0b3b-4a30-93de-4dc6064a8943-kube-api-access-p2mg9\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:48.527963 master-1 kubenswrapper[4740]: I1014 13:37:48.527942 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:48.528247 master-1 kubenswrapper[4740]: I1014 13:37:48.527980 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:48.528247 master-1 kubenswrapper[4740]: I1014 13:37:48.527992 4740 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-ovndb-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:48.528247 master-1 kubenswrapper[4740]: I1014 13:37:48.528003 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0401d960-0b3b-4a30-93de-4dc6064a8943-httpd-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:37:48.539976 master-1 kubenswrapper[4740]: I1014 13:37:48.539932 4740 scope.go:117] "RemoveContainer" containerID="a5fc8ecdc6b86053832093cfe590deac075689a8f33aebf33bb2e2ce8db6920c" Oct 14 13:37:48.815632 master-1 kubenswrapper[4740]: I1014 13:37:48.815569 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55c4fcb4cb-xfg9j"] Oct 14 13:37:48.823410 master-1 kubenswrapper[4740]: I1014 13:37:48.823330 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-55c4fcb4cb-xfg9j"] Oct 14 13:37:48.944738 master-1 kubenswrapper[4740]: I1014 13:37:48.944562 4740 scope.go:117] "RemoveContainer" containerID="6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6" Oct 14 13:37:48.945278 master-1 kubenswrapper[4740]: E1014 13:37:48.944905 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:37:48.960636 master-1 kubenswrapper[4740]: I1014 13:37:48.960547 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" path="/var/lib/kubelet/pods/0401d960-0b3b-4a30-93de-4dc6064a8943/volumes" Oct 14 13:37:49.466184 master-1 kubenswrapper[4740]: I1014 13:37:49.466122 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9","Type":"ContainerStarted","Data":"77300c8218ea65fbb015d079534e800c160a246d0981cb56073eac9db3d88eff"} Oct 14 13:37:49.466184 master-1 kubenswrapper[4740]: I1014 13:37:49.466180 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-internal-api-1" event={"ID":"4a2751e7-a85f-4aec-9051-6bb7ebe85eb9","Type":"ContainerStarted","Data":"daab1d34c60abc4b2565e3f1d8584d9115344f79d56108f9a634b0b8150d0ccb"} Oct 14 13:37:49.510258 master-1 kubenswrapper[4740]: I1014 13:37:49.507707 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-46645-default-internal-api-1" podStartSLOduration=5.50768585 podStartE2EDuration="5.50768585s" podCreationTimestamp="2025-10-14 13:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:37:49.497988635 +0000 UTC m=+1895.308277974" watchObservedRunningTime="2025-10-14 13:37:49.50768585 +0000 UTC m=+1895.317975179" Oct 14 13:37:56.938907 master-1 kubenswrapper[4740]: I1014 13:37:56.938796 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:56.938907 master-1 kubenswrapper[4740]: I1014 13:37:56.938903 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:56.990290 master-1 kubenswrapper[4740]: I1014 13:37:56.990188 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:57.023487 master-1 kubenswrapper[4740]: I1014 13:37:57.023388 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:57.549889 master-1 kubenswrapper[4740]: I1014 13:37:57.549820 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:57.550329 master-1 kubenswrapper[4740]: I1014 13:37:57.549910 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:58.441743 master-1 kubenswrapper[4740]: I1014 13:37:58.441633 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:37:58.442417 master-1 kubenswrapper[4740]: I1014 13:37:58.442123 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-46645-default-external-api-0" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-log" containerID="cri-o://379b6c835b4e8f13348bf16b176f146a071805fa9ab4a6f04530b02ffd6f3ad5" gracePeriod=30 Oct 14 13:37:58.442481 
master-1 kubenswrapper[4740]: I1014 13:37:58.442323 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-46645-default-external-api-0" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-httpd" containerID="cri-o://fd21787fe173e7d31edd4b3c041226299c7a231183c4f3be230b32205cea12e3" gracePeriod=30 Oct 14 13:37:58.861902 master-1 kubenswrapper[4740]: I1014 13:37:58.861798 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-vgnvk"] Oct 14 13:37:58.862656 master-1 kubenswrapper[4740]: E1014 13:37:58.862370 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-api" Oct 14 13:37:58.862745 master-1 kubenswrapper[4740]: I1014 13:37:58.862675 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-api" Oct 14 13:37:58.862745 master-1 kubenswrapper[4740]: E1014 13:37:58.862731 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-httpd" Oct 14 13:37:58.862837 master-1 kubenswrapper[4740]: I1014 13:37:58.862752 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-httpd" Oct 14 13:37:58.863256 master-1 kubenswrapper[4740]: I1014 13:37:58.863072 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-api" Oct 14 13:37:58.863256 master-1 kubenswrapper[4740]: I1014 13:37:58.863128 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0401d960-0b3b-4a30-93de-4dc6064a8943" containerName="neutron-httpd" Oct 14 13:37:58.864170 master-1 kubenswrapper[4740]: I1014 13:37:58.864129 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vgnvk" Oct 14 13:37:58.876060 master-1 kubenswrapper[4740]: I1014 13:37:58.875970 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vgnvk"] Oct 14 13:37:58.981300 master-1 kubenswrapper[4740]: I1014 13:37:58.981173 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cfvt\" (UniqueName: \"kubernetes.io/projected/abc0b252-d950-4ddd-8788-4fdc12cce585-kube-api-access-8cfvt\") pod \"aodh-db-create-vgnvk\" (UID: \"abc0b252-d950-4ddd-8788-4fdc12cce585\") " pod="openstack/aodh-db-create-vgnvk" Oct 14 13:37:59.083371 master-1 kubenswrapper[4740]: I1014 13:37:59.083295 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cfvt\" (UniqueName: \"kubernetes.io/projected/abc0b252-d950-4ddd-8788-4fdc12cce585-kube-api-access-8cfvt\") pod \"aodh-db-create-vgnvk\" (UID: \"abc0b252-d950-4ddd-8788-4fdc12cce585\") " pod="openstack/aodh-db-create-vgnvk" Oct 14 13:37:59.109482 master-1 kubenswrapper[4740]: I1014 13:37:59.109418 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cfvt\" (UniqueName: \"kubernetes.io/projected/abc0b252-d950-4ddd-8788-4fdc12cce585-kube-api-access-8cfvt\") pod \"aodh-db-create-vgnvk\" (UID: \"abc0b252-d950-4ddd-8788-4fdc12cce585\") " pod="openstack/aodh-db-create-vgnvk" Oct 14 13:37:59.188258 master-1 kubenswrapper[4740]: I1014 13:37:59.188042 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vgnvk" Oct 14 13:37:59.557401 master-1 kubenswrapper[4740]: I1014 13:37:59.557300 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:59.570043 master-1 kubenswrapper[4740]: I1014 13:37:59.569877 4740 generic.go:334] "Generic (PLEG): container finished" podID="e230307d-3fb2-44c5-8259-563e509c9f68" containerID="379b6c835b4e8f13348bf16b176f146a071805fa9ab4a6f04530b02ffd6f3ad5" exitCode=143 Oct 14 13:37:59.570043 master-1 kubenswrapper[4740]: I1014 13:37:59.570045 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:37:59.571130 master-1 kubenswrapper[4740]: I1014 13:37:59.571091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"e230307d-3fb2-44c5-8259-563e509c9f68","Type":"ContainerDied","Data":"379b6c835b4e8f13348bf16b176f146a071805fa9ab4a6f04530b02ffd6f3ad5"} Oct 14 13:37:59.595122 master-1 kubenswrapper[4740]: I1014 13:37:59.595029 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-46645-default-internal-api-1" Oct 14 13:37:59.666458 master-1 kubenswrapper[4740]: W1014 13:37:59.666396 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabc0b252_d950_4ddd_8788_4fdc12cce585.slice/crio-605be265ae512f354f72ea29283390097e07c17caea3f31e0191e3250696df09 WatchSource:0}: Error finding container 605be265ae512f354f72ea29283390097e07c17caea3f31e0191e3250696df09: Status 404 returned error can't find the container with id 605be265ae512f354f72ea29283390097e07c17caea3f31e0191e3250696df09 Oct 14 13:37:59.786448 master-1 kubenswrapper[4740]: I1014 13:37:59.786350 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vgnvk"] Oct 14 13:38:00.593663 master-1 kubenswrapper[4740]: I1014 
13:38:00.593584 4740 generic.go:334] "Generic (PLEG): container finished" podID="abc0b252-d950-4ddd-8788-4fdc12cce585" containerID="ac82c31ac2185f1368e4846fddb7cbe03a10a34702628bccf6254a9f9bcc044e" exitCode=0 Oct 14 13:38:00.593663 master-1 kubenswrapper[4740]: I1014 13:38:00.593638 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vgnvk" event={"ID":"abc0b252-d950-4ddd-8788-4fdc12cce585","Type":"ContainerDied","Data":"ac82c31ac2185f1368e4846fddb7cbe03a10a34702628bccf6254a9f9bcc044e"} Oct 14 13:38:00.594875 master-1 kubenswrapper[4740]: I1014 13:38:00.593703 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vgnvk" event={"ID":"abc0b252-d950-4ddd-8788-4fdc12cce585","Type":"ContainerStarted","Data":"605be265ae512f354f72ea29283390097e07c17caea3f31e0191e3250696df09"} Oct 14 13:38:02.412768 master-1 kubenswrapper[4740]: I1014 13:38:02.412693 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vgnvk" Oct 14 13:38:02.490944 master-1 kubenswrapper[4740]: I1014 13:38:02.490880 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cfvt\" (UniqueName: \"kubernetes.io/projected/abc0b252-d950-4ddd-8788-4fdc12cce585-kube-api-access-8cfvt\") pod \"abc0b252-d950-4ddd-8788-4fdc12cce585\" (UID: \"abc0b252-d950-4ddd-8788-4fdc12cce585\") " Oct 14 13:38:02.497656 master-1 kubenswrapper[4740]: I1014 13:38:02.497569 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc0b252-d950-4ddd-8788-4fdc12cce585-kube-api-access-8cfvt" (OuterVolumeSpecName: "kube-api-access-8cfvt") pod "abc0b252-d950-4ddd-8788-4fdc12cce585" (UID: "abc0b252-d950-4ddd-8788-4fdc12cce585"). InnerVolumeSpecName "kube-api-access-8cfvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:38:02.594638 master-1 kubenswrapper[4740]: I1014 13:38:02.594516 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cfvt\" (UniqueName: \"kubernetes.io/projected/abc0b252-d950-4ddd-8788-4fdc12cce585-kube-api-access-8cfvt\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:02.620259 master-1 kubenswrapper[4740]: I1014 13:38:02.620174 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vgnvk" event={"ID":"abc0b252-d950-4ddd-8788-4fdc12cce585","Type":"ContainerDied","Data":"605be265ae512f354f72ea29283390097e07c17caea3f31e0191e3250696df09"} Oct 14 13:38:02.620560 master-1 kubenswrapper[4740]: I1014 13:38:02.620290 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="605be265ae512f354f72ea29283390097e07c17caea3f31e0191e3250696df09" Oct 14 13:38:02.620560 master-1 kubenswrapper[4740]: I1014 13:38:02.620393 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vgnvk" Oct 14 13:38:02.633310 master-1 kubenswrapper[4740]: I1014 13:38:02.633264 4740 generic.go:334] "Generic (PLEG): container finished" podID="e230307d-3fb2-44c5-8259-563e509c9f68" containerID="fd21787fe173e7d31edd4b3c041226299c7a231183c4f3be230b32205cea12e3" exitCode=0 Oct 14 13:38:02.633658 master-1 kubenswrapper[4740]: I1014 13:38:02.633380 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"e230307d-3fb2-44c5-8259-563e509c9f68","Type":"ContainerDied","Data":"fd21787fe173e7d31edd4b3c041226299c7a231183c4f3be230b32205cea12e3"} Oct 14 13:38:02.735119 master-1 kubenswrapper[4740]: I1014 13:38:02.735068 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:02.799930 master-1 kubenswrapper[4740]: I1014 13:38:02.799831 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-combined-ca-bundle\") pod \"e230307d-3fb2-44c5-8259-563e509c9f68\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " Oct 14 13:38:02.800265 master-1 kubenswrapper[4740]: I1014 13:38:02.800103 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"e230307d-3fb2-44c5-8259-563e509c9f68\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " Oct 14 13:38:02.800265 master-1 kubenswrapper[4740]: I1014 13:38:02.800194 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-config-data\") pod \"e230307d-3fb2-44c5-8259-563e509c9f68\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " Oct 14 13:38:02.800265 master-1 kubenswrapper[4740]: I1014 13:38:02.800255 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-logs\") pod \"e230307d-3fb2-44c5-8259-563e509c9f68\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " Oct 14 13:38:02.800395 master-1 kubenswrapper[4740]: I1014 13:38:02.800303 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-httpd-run\") pod \"e230307d-3fb2-44c5-8259-563e509c9f68\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " Oct 14 13:38:02.800395 master-1 kubenswrapper[4740]: I1014 13:38:02.800385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-scripts\") pod \"e230307d-3fb2-44c5-8259-563e509c9f68\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " Oct 14 13:38:02.801283 master-1 kubenswrapper[4740]: I1014 13:38:02.800471 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5gbn\" (UniqueName: \"kubernetes.io/projected/e230307d-3fb2-44c5-8259-563e509c9f68-kube-api-access-r5gbn\") pod \"e230307d-3fb2-44c5-8259-563e509c9f68\" (UID: \"e230307d-3fb2-44c5-8259-563e509c9f68\") " Oct 14 13:38:02.801283 master-1 kubenswrapper[4740]: I1014 13:38:02.801009 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e230307d-3fb2-44c5-8259-563e509c9f68" (UID: "e230307d-3fb2-44c5-8259-563e509c9f68"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:38:02.801283 master-1 kubenswrapper[4740]: I1014 13:38:02.801159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-logs" (OuterVolumeSpecName: "logs") pod "e230307d-3fb2-44c5-8259-563e509c9f68" (UID: "e230307d-3fb2-44c5-8259-563e509c9f68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:38:02.805495 master-1 kubenswrapper[4740]: I1014 13:38:02.805249 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-scripts" (OuterVolumeSpecName: "scripts") pod "e230307d-3fb2-44c5-8259-563e509c9f68" (UID: "e230307d-3fb2-44c5-8259-563e509c9f68"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:38:02.807633 master-1 kubenswrapper[4740]: I1014 13:38:02.807568 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e230307d-3fb2-44c5-8259-563e509c9f68-kube-api-access-r5gbn" (OuterVolumeSpecName: "kube-api-access-r5gbn") pod "e230307d-3fb2-44c5-8259-563e509c9f68" (UID: "e230307d-3fb2-44c5-8259-563e509c9f68"). InnerVolumeSpecName "kube-api-access-r5gbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:38:02.825182 master-1 kubenswrapper[4740]: I1014 13:38:02.825117 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794" (OuterVolumeSpecName: "glance") pod "e230307d-3fb2-44c5-8259-563e509c9f68" (UID: "e230307d-3fb2-44c5-8259-563e509c9f68"). InnerVolumeSpecName "pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 13:38:02.838773 master-1 kubenswrapper[4740]: I1014 13:38:02.838462 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e230307d-3fb2-44c5-8259-563e509c9f68" (UID: "e230307d-3fb2-44c5-8259-563e509c9f68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:38:02.871082 master-1 kubenswrapper[4740]: I1014 13:38:02.870940 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-config-data" (OuterVolumeSpecName: "config-data") pod "e230307d-3fb2-44c5-8259-563e509c9f68" (UID: "e230307d-3fb2-44c5-8259-563e509c9f68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:38:02.903482 master-1 kubenswrapper[4740]: I1014 13:38:02.903420 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:02.903871 master-1 kubenswrapper[4740]: I1014 13:38:02.903854 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") on node \"master-1\" " Oct 14 13:38:02.904035 master-1 kubenswrapper[4740]: I1014 13:38:02.904018 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:02.904181 master-1 kubenswrapper[4740]: I1014 13:38:02.904166 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:02.904335 master-1 kubenswrapper[4740]: I1014 13:38:02.904324 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e230307d-3fb2-44c5-8259-563e509c9f68-httpd-run\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:02.904424 master-1 kubenswrapper[4740]: I1014 13:38:02.904414 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e230307d-3fb2-44c5-8259-563e509c9f68-scripts\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:02.904503 master-1 kubenswrapper[4740]: I1014 13:38:02.904493 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5gbn\" (UniqueName: \"kubernetes.io/projected/e230307d-3fb2-44c5-8259-563e509c9f68-kube-api-access-r5gbn\") 
on node \"master-1\" DevicePath \"\"" Oct 14 13:38:02.924023 master-1 kubenswrapper[4740]: I1014 13:38:02.924005 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Oct 14 13:38:02.924351 master-1 kubenswrapper[4740]: I1014 13:38:02.924337 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd" (UniqueName: "kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794") on node "master-1" Oct 14 13:38:03.006870 master-1 kubenswrapper[4740]: I1014 13:38:03.006764 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:03.645461 master-1 kubenswrapper[4740]: I1014 13:38:03.645391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"e230307d-3fb2-44c5-8259-563e509c9f68","Type":"ContainerDied","Data":"936e517acfa6466126b44ca7a20619dfd79298ad00adadb9fd2115b3a07b87f8"} Oct 14 13:38:03.645461 master-1 kubenswrapper[4740]: I1014 13:38:03.645454 4740 scope.go:117] "RemoveContainer" containerID="fd21787fe173e7d31edd4b3c041226299c7a231183c4f3be230b32205cea12e3" Oct 14 13:38:03.646402 master-1 kubenswrapper[4740]: I1014 13:38:03.645552 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.672468 master-1 kubenswrapper[4740]: I1014 13:38:03.671970 4740 scope.go:117] "RemoveContainer" containerID="379b6c835b4e8f13348bf16b176f146a071805fa9ab4a6f04530b02ffd6f3ad5" Oct 14 13:38:03.693251 master-1 kubenswrapper[4740]: I1014 13:38:03.693143 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:38:03.740942 master-1 kubenswrapper[4740]: I1014 13:38:03.740855 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:38:03.792093 master-1 kubenswrapper[4740]: I1014 13:38:03.791983 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:38:03.793348 master-1 kubenswrapper[4740]: E1014 13:38:03.793286 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-httpd" Oct 14 13:38:03.793348 master-1 kubenswrapper[4740]: I1014 13:38:03.793344 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-httpd" Oct 14 13:38:03.793543 master-1 kubenswrapper[4740]: E1014 13:38:03.793388 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-log" Oct 14 13:38:03.793627 master-1 kubenswrapper[4740]: I1014 13:38:03.793552 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-log" Oct 14 13:38:03.793627 master-1 kubenswrapper[4740]: E1014 13:38:03.793585 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc0b252-d950-4ddd-8788-4fdc12cce585" containerName="mariadb-database-create" Oct 14 13:38:03.793627 master-1 kubenswrapper[4740]: I1014 13:38:03.793604 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="abc0b252-d950-4ddd-8788-4fdc12cce585" containerName="mariadb-database-create" Oct 14 13:38:03.793984 master-1 kubenswrapper[4740]: I1014 13:38:03.793938 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc0b252-d950-4ddd-8788-4fdc12cce585" containerName="mariadb-database-create" Oct 14 13:38:03.794070 master-1 kubenswrapper[4740]: I1014 13:38:03.793986 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-httpd" Oct 14 13:38:03.794070 master-1 kubenswrapper[4740]: I1014 13:38:03.794005 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" containerName="glance-log" Oct 14 13:38:03.795790 master-1 kubenswrapper[4740]: I1014 13:38:03.795738 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.799336 master-1 kubenswrapper[4740]: I1014 13:38:03.799265 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 13:38:03.799490 master-1 kubenswrapper[4740]: I1014 13:38:03.799419 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-46645-default-external-config-data" Oct 14 13:38:03.927567 master-1 kubenswrapper[4740]: I1014 13:38:03.927428 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3ef91b-33a2-4ebc-ba71-f798671033d6-logs\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.927805 master-1 kubenswrapper[4740]: I1014 13:38:03.927628 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xdvk\" (UniqueName: 
\"kubernetes.io/projected/3c3ef91b-33a2-4ebc-ba71-f798671033d6-kube-api-access-7xdvk\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.927907 master-1 kubenswrapper[4740]: I1014 13:38:03.927861 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.928045 master-1 kubenswrapper[4740]: I1014 13:38:03.928009 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-public-tls-certs\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.928128 master-1 kubenswrapper[4740]: I1014 13:38:03.928090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-scripts\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.928214 master-1 kubenswrapper[4740]: I1014 13:38:03.928189 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-config-data\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.928214 master-1 
kubenswrapper[4740]: I1014 13:38:03.928211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c3ef91b-33a2-4ebc-ba71-f798671033d6-httpd-run\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.928389 master-1 kubenswrapper[4740]: I1014 13:38:03.928350 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-combined-ca-bundle\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:03.943856 master-1 kubenswrapper[4740]: I1014 13:38:03.943776 4740 scope.go:117] "RemoveContainer" containerID="6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6" Oct 14 13:38:03.982198 master-1 kubenswrapper[4740]: I1014 13:38:03.980863 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:38:04.031675 master-1 kubenswrapper[4740]: I1014 13:38:04.031625 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3ef91b-33a2-4ebc-ba71-f798671033d6-logs\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.031885 master-1 kubenswrapper[4740]: I1014 13:38:04.030896 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3ef91b-33a2-4ebc-ba71-f798671033d6-logs\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 
13:38:04.032437 master-1 kubenswrapper[4740]: I1014 13:38:04.032400 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xdvk\" (UniqueName: \"kubernetes.io/projected/3c3ef91b-33a2-4ebc-ba71-f798671033d6-kube-api-access-7xdvk\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.032947 master-1 kubenswrapper[4740]: I1014 13:38:04.032912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.033840 master-1 kubenswrapper[4740]: I1014 13:38:04.033808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-public-tls-certs\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.033907 master-1 kubenswrapper[4740]: I1014 13:38:04.033865 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-scripts\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.033960 master-1 kubenswrapper[4740]: I1014 13:38:04.033923 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-config-data\") pod \"glance-46645-default-external-api-0\" (UID: 
\"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.033960 master-1 kubenswrapper[4740]: I1014 13:38:04.033943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c3ef91b-33a2-4ebc-ba71-f798671033d6-httpd-run\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.035139 master-1 kubenswrapper[4740]: I1014 13:38:04.033965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-combined-ca-bundle\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.035139 master-1 kubenswrapper[4740]: I1014 13:38:04.034659 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c3ef91b-33a2-4ebc-ba71-f798671033d6-httpd-run\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.035339 master-1 kubenswrapper[4740]: I1014 13:38:04.035311 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 14 13:38:04.035431 master-1 kubenswrapper[4740]: I1014 13:38:04.035377 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/108a495d9f41cc9de81d4e0f645aaa659a8dff504f4fe9597cfbed6c597a62b0/globalmount\"" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.037694 master-1 kubenswrapper[4740]: I1014 13:38:04.037656 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-public-tls-certs\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.043481 master-1 kubenswrapper[4740]: I1014 13:38:04.040394 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-combined-ca-bundle\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.043481 master-1 kubenswrapper[4740]: I1014 13:38:04.041078 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-config-data\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.043481 master-1 kubenswrapper[4740]: I1014 13:38:04.041367 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3c3ef91b-33a2-4ebc-ba71-f798671033d6-scripts\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.057094 master-1 kubenswrapper[4740]: I1014 13:38:04.057004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xdvk\" (UniqueName: \"kubernetes.io/projected/3c3ef91b-33a2-4ebc-ba71-f798671033d6-kube-api-access-7xdvk\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:04.661050 master-1 kubenswrapper[4740]: I1014 13:38:04.660965 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerStarted","Data":"1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9"} Oct 14 13:38:04.960938 master-1 kubenswrapper[4740]: I1014 13:38:04.960744 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e230307d-3fb2-44c5-8259-563e509c9f68" path="/var/lib/kubelet/pods/e230307d-3fb2-44c5-8259-563e509c9f68/volumes" Oct 14 13:38:05.456535 master-1 kubenswrapper[4740]: I1014 13:38:05.456454 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8aa4f972-3f9d-4a9c-a73a-c5f7a791f1bd\" (UniqueName: \"kubernetes.io/csi/topolvm.io^127288ef-94ab-46c0-9502-901e53f88794\") pod \"glance-46645-default-external-api-0\" (UID: \"3c3ef91b-33a2-4ebc-ba71-f798671033d6\") " pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:05.621768 master-1 kubenswrapper[4740]: I1014 13:38:05.621706 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:05.676695 master-1 kubenswrapper[4740]: I1014 13:38:05.676622 4740 generic.go:334] "Generic (PLEG): container finished" podID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9" exitCode=1 Oct 14 13:38:05.677656 master-1 kubenswrapper[4740]: I1014 13:38:05.676690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9"} Oct 14 13:38:05.677656 master-1 kubenswrapper[4740]: I1014 13:38:05.676794 4740 scope.go:117] "RemoveContainer" containerID="6931e983e7605014604e2cd6306b4b425b2b55bb9bbf5f6fa8c224eaa85a35b6" Oct 14 13:38:05.683282 master-1 kubenswrapper[4740]: I1014 13:38:05.683148 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9" Oct 14 13:38:05.683549 master-1 kubenswrapper[4740]: E1014 13:38:05.683503 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:38:06.200837 master-1 kubenswrapper[4740]: I1014 13:38:06.200759 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46645-default-external-api-0"] Oct 14 13:38:06.206716 master-1 kubenswrapper[4740]: W1014 13:38:06.206630 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c3ef91b_33a2_4ebc_ba71_f798671033d6.slice/crio-f30b5cc2a034ac7db03b2d50bf261460dd17b0f738ad5fefc640962aa02172ec 
WatchSource:0}: Error finding container f30b5cc2a034ac7db03b2d50bf261460dd17b0f738ad5fefc640962aa02172ec: Status 404 returned error can't find the container with id f30b5cc2a034ac7db03b2d50bf261460dd17b0f738ad5fefc640962aa02172ec Oct 14 13:38:06.690280 master-1 kubenswrapper[4740]: I1014 13:38:06.690149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"3c3ef91b-33a2-4ebc-ba71-f798671033d6","Type":"ContainerStarted","Data":"f30b5cc2a034ac7db03b2d50bf261460dd17b0f738ad5fefc640962aa02172ec"} Oct 14 13:38:07.702605 master-1 kubenswrapper[4740]: I1014 13:38:07.702538 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"3c3ef91b-33a2-4ebc-ba71-f798671033d6","Type":"ContainerStarted","Data":"f284b6e20c7ad1560325b2bede0020c0d5fadc1afa4367c3e1f2bc622f4c26f6"} Oct 14 13:38:07.702605 master-1 kubenswrapper[4740]: I1014 13:38:07.702603 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46645-default-external-api-0" event={"ID":"3c3ef91b-33a2-4ebc-ba71-f798671033d6","Type":"ContainerStarted","Data":"3d78527d1ae80f4f21c1d1e6941b2b3f91f9e74dcc876857035dcab3714d3ecf"} Oct 14 13:38:07.868790 master-1 kubenswrapper[4740]: I1014 13:38:07.868650 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-46645-default-external-api-0" podStartSLOduration=4.868624325 podStartE2EDuration="4.868624325s" podCreationTimestamp="2025-10-14 13:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:38:07.738342715 +0000 UTC m=+1913.548632044" watchObservedRunningTime="2025-10-14 13:38:07.868624325 +0000 UTC m=+1913.678913654" Oct 14 13:38:09.000713 master-1 kubenswrapper[4740]: I1014 13:38:09.000501 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-d329-account-create-g8tsl"] Oct 
14 13:38:09.003468 master-1 kubenswrapper[4740]: I1014 13:38:09.003324 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-d329-account-create-g8tsl" Oct 14 13:38:09.012455 master-1 kubenswrapper[4740]: I1014 13:38:09.012216 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d329-account-create-g8tsl"] Oct 14 13:38:09.048333 master-1 kubenswrapper[4740]: I1014 13:38:09.048033 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Oct 14 13:38:09.049457 master-1 kubenswrapper[4740]: I1014 13:38:09.049410 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9zpg\" (UniqueName: \"kubernetes.io/projected/cfc4b770-906b-416c-9f2b-a9a4bfbea3b4-kube-api-access-w9zpg\") pod \"aodh-d329-account-create-g8tsl\" (UID: \"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4\") " pod="openstack/aodh-d329-account-create-g8tsl" Oct 14 13:38:09.152207 master-1 kubenswrapper[4740]: I1014 13:38:09.152068 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zpg\" (UniqueName: \"kubernetes.io/projected/cfc4b770-906b-416c-9f2b-a9a4bfbea3b4-kube-api-access-w9zpg\") pod \"aodh-d329-account-create-g8tsl\" (UID: \"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4\") " pod="openstack/aodh-d329-account-create-g8tsl" Oct 14 13:38:09.174417 master-1 kubenswrapper[4740]: I1014 13:38:09.174344 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zpg\" (UniqueName: \"kubernetes.io/projected/cfc4b770-906b-416c-9f2b-a9a4bfbea3b4-kube-api-access-w9zpg\") pod \"aodh-d329-account-create-g8tsl\" (UID: \"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4\") " pod="openstack/aodh-d329-account-create-g8tsl" Oct 14 13:38:09.370111 master-1 kubenswrapper[4740]: I1014 13:38:09.369949 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d329-account-create-g8tsl" Oct 14 13:38:09.890824 master-1 kubenswrapper[4740]: I1014 13:38:09.890745 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-d329-account-create-g8tsl"] Oct 14 13:38:10.742848 master-1 kubenswrapper[4740]: I1014 13:38:10.742755 4740 generic.go:334] "Generic (PLEG): container finished" podID="cfc4b770-906b-416c-9f2b-a9a4bfbea3b4" containerID="70303063508b347fa472e66ad393f2aceb33c9134e23bec67dfa14ec1c5ce52c" exitCode=0 Oct 14 13:38:10.742848 master-1 kubenswrapper[4740]: I1014 13:38:10.742822 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d329-account-create-g8tsl" event={"ID":"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4","Type":"ContainerDied","Data":"70303063508b347fa472e66ad393f2aceb33c9134e23bec67dfa14ec1c5ce52c"} Oct 14 13:38:10.743990 master-1 kubenswrapper[4740]: I1014 13:38:10.742886 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d329-account-create-g8tsl" event={"ID":"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4","Type":"ContainerStarted","Data":"523e8cf19fcaecba70898d37db754070119b81f75fbbbb494e34eac9943bf213"} Oct 14 13:38:12.623632 master-1 kubenswrapper[4740]: I1014 13:38:12.623536 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d329-account-create-g8tsl" Oct 14 13:38:12.743962 master-1 kubenswrapper[4740]: I1014 13:38:12.743893 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9zpg\" (UniqueName: \"kubernetes.io/projected/cfc4b770-906b-416c-9f2b-a9a4bfbea3b4-kube-api-access-w9zpg\") pod \"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4\" (UID: \"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4\") " Oct 14 13:38:12.747822 master-1 kubenswrapper[4740]: I1014 13:38:12.747707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc4b770-906b-416c-9f2b-a9a4bfbea3b4-kube-api-access-w9zpg" (OuterVolumeSpecName: "kube-api-access-w9zpg") pod "cfc4b770-906b-416c-9f2b-a9a4bfbea3b4" (UID: "cfc4b770-906b-416c-9f2b-a9a4bfbea3b4"). InnerVolumeSpecName "kube-api-access-w9zpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:38:12.777033 master-1 kubenswrapper[4740]: I1014 13:38:12.776970 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-d329-account-create-g8tsl" event={"ID":"cfc4b770-906b-416c-9f2b-a9a4bfbea3b4","Type":"ContainerDied","Data":"523e8cf19fcaecba70898d37db754070119b81f75fbbbb494e34eac9943bf213"} Oct 14 13:38:12.777033 master-1 kubenswrapper[4740]: I1014 13:38:12.777032 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="523e8cf19fcaecba70898d37db754070119b81f75fbbbb494e34eac9943bf213" Oct 14 13:38:12.777195 master-1 kubenswrapper[4740]: I1014 13:38:12.777097 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-d329-account-create-g8tsl" Oct 14 13:38:12.846923 master-1 kubenswrapper[4740]: I1014 13:38:12.846844 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9zpg\" (UniqueName: \"kubernetes.io/projected/cfc4b770-906b-416c-9f2b-a9a4bfbea3b4-kube-api-access-w9zpg\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:15.622867 master-1 kubenswrapper[4740]: I1014 13:38:15.622731 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:15.622867 master-1 kubenswrapper[4740]: I1014 13:38:15.622807 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:15.656681 master-1 kubenswrapper[4740]: I1014 13:38:15.656620 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:15.663135 master-1 kubenswrapper[4740]: I1014 13:38:15.662756 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:15.809192 master-1 kubenswrapper[4740]: I1014 13:38:15.809114 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:15.809192 master-1 kubenswrapper[4740]: I1014 13:38:15.809178 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:17.733614 master-1 kubenswrapper[4740]: I1014 13:38:17.733531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:17.763821 master-1 kubenswrapper[4740]: I1014 13:38:17.763691 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-46645-default-external-api-0" Oct 14 13:38:19.943988 
master-1 kubenswrapper[4740]: I1014 13:38:19.943900 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9" Oct 14 13:38:19.945000 master-1 kubenswrapper[4740]: E1014 13:38:19.944211 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:38:22.080502 master-1 kubenswrapper[4740]: I1014 13:38:22.080439 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Oct 14 13:38:22.081210 master-1 kubenswrapper[4740]: E1014 13:38:22.081063 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc4b770-906b-416c-9f2b-a9a4bfbea3b4" containerName="mariadb-account-create" Oct 14 13:38:22.081210 master-1 kubenswrapper[4740]: I1014 13:38:22.081083 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc4b770-906b-416c-9f2b-a9a4bfbea3b4" containerName="mariadb-account-create" Oct 14 13:38:22.081505 master-1 kubenswrapper[4740]: I1014 13:38:22.081426 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc4b770-906b-416c-9f2b-a9a4bfbea3b4" containerName="mariadb-account-create" Oct 14 13:38:22.082433 master-1 kubenswrapper[4740]: I1014 13:38:22.082399 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.089499 master-1 kubenswrapper[4740]: I1014 13:38:22.089339 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Oct 14 13:38:22.101996 master-1 kubenswrapper[4740]: I1014 13:38:22.099071 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Oct 14 13:38:22.117541 master-1 kubenswrapper[4740]: I1014 13:38:22.117063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08db1e8c-2581-4212-b7ba-514ced29249f-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.117541 master-1 kubenswrapper[4740]: I1014 13:38:22.117216 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pvc\" (UniqueName: \"kubernetes.io/projected/08db1e8c-2581-4212-b7ba-514ced29249f-kube-api-access-n8pvc\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.117541 master-1 kubenswrapper[4740]: I1014 13:38:22.117284 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08db1e8c-2581-4212-b7ba-514ced29249f-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.219467 master-1 kubenswrapper[4740]: I1014 13:38:22.218805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/08db1e8c-2581-4212-b7ba-514ced29249f-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.219467 master-1 kubenswrapper[4740]: I1014 13:38:22.218900 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pvc\" (UniqueName: \"kubernetes.io/projected/08db1e8c-2581-4212-b7ba-514ced29249f-kube-api-access-n8pvc\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.219467 master-1 kubenswrapper[4740]: I1014 13:38:22.218934 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08db1e8c-2581-4212-b7ba-514ced29249f-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.222977 master-1 kubenswrapper[4740]: I1014 13:38:22.222914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08db1e8c-2581-4212-b7ba-514ced29249f-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.223479 master-1 kubenswrapper[4740]: I1014 13:38:22.223062 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08db1e8c-2581-4212-b7ba-514ced29249f-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.251040 master-1 kubenswrapper[4740]: I1014 13:38:22.250971 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n8pvc\" (UniqueName: \"kubernetes.io/projected/08db1e8c-2581-4212-b7ba-514ced29249f-kube-api-access-n8pvc\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"08db1e8c-2581-4212-b7ba-514ced29249f\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.269793 master-1 kubenswrapper[4740]: I1014 13:38:22.269600 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:38:22.270786 master-1 kubenswrapper[4740]: I1014 13:38:22.270736 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.276248 master-1 kubenswrapper[4740]: I1014 13:38:22.275684 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 13:38:22.295407 master-1 kubenswrapper[4740]: I1014 13:38:22.295338 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:38:22.302967 master-1 kubenswrapper[4740]: I1014 13:38:22.302916 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 14 13:38:22.304350 master-1 kubenswrapper[4740]: I1014 13:38:22.304328 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:38:22.307543 master-1 kubenswrapper[4740]: I1014 13:38:22.307388 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:38:22.323119 master-1 kubenswrapper[4740]: I1014 13:38:22.322985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.323119 master-1 kubenswrapper[4740]: I1014 13:38:22.323055 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.323119 master-1 kubenswrapper[4740]: I1014 13:38:22.323111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.323436 master-1 kubenswrapper[4740]: I1014 13:38:22.323172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcpg\" (UniqueName: \"kubernetes.io/projected/686b8fd6-af27-4819-b5f5-6fdffec65a98-kube-api-access-5jcpg\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.323436 master-1 kubenswrapper[4740]: I1014 13:38:22.323263 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-config-data\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.323436 master-1 kubenswrapper[4740]: I1014 13:38:22.323401 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/686b8fd6-af27-4819-b5f5-6fdffec65a98-logs\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.323703 master-1 kubenswrapper[4740]: I1014 13:38:22.323523 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69b9n\" (UniqueName: \"kubernetes.io/projected/e497759a-6e7f-423b-b8f7-9f52606d2ec3-kube-api-access-69b9n\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.343193 master-1 kubenswrapper[4740]: I1014 13:38:22.343156 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:38:22.402976 master-1 kubenswrapper[4740]: I1014 13:38:22.402771 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:22.405151 master-1 kubenswrapper[4740]: I1014 13:38:22.405120 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:38:22.408970 master-1 kubenswrapper[4740]: I1014 13:38:22.408950 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 13:38:22.419124 master-1 kubenswrapper[4740]: I1014 13:38:22.419049 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Oct 14 13:38:22.425454 master-1 kubenswrapper[4740]: I1014 13:38:22.425305 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.427492 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69b9n\" (UniqueName: \"kubernetes.io/projected/e497759a-6e7f-423b-b8f7-9f52606d2ec3-kube-api-access-69b9n\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.428822 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.428881 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-config-data\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.428952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.428994 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.429035 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.429089 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcpg\" (UniqueName: \"kubernetes.io/projected/686b8fd6-af27-4819-b5f5-6fdffec65a98-kube-api-access-5jcpg\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.429130 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-config-data\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.429220 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/686b8fd6-af27-4819-b5f5-6fdffec65a98-logs\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.431974 master-1 kubenswrapper[4740]: I1014 13:38:22.429270 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77qg\" (UniqueName: \"kubernetes.io/projected/f0f37760-d0d3-44d0-b4b2-88095f10222f-kube-api-access-j77qg\") pod \"nova-scheduler-1\" (UID: 
\"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.432912 master-1 kubenswrapper[4740]: I1014 13:38:22.432615 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/686b8fd6-af27-4819-b5f5-6fdffec65a98-logs\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.434327 master-1 kubenswrapper[4740]: I1014 13:38:22.434245 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.439760 master-1 kubenswrapper[4740]: I1014 13:38:22.439052 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.440466 master-1 kubenswrapper[4740]: I1014 13:38:22.440424 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.443106 master-1 kubenswrapper[4740]: I1014 13:38:22.443035 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-config-data\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.462571 master-1 kubenswrapper[4740]: I1014 13:38:22.462499 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jcpg\" (UniqueName: \"kubernetes.io/projected/686b8fd6-af27-4819-b5f5-6fdffec65a98-kube-api-access-5jcpg\") pod \"nova-api-2\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") " pod="openstack/nova-api-2" Oct 14 13:38:22.463532 master-1 kubenswrapper[4740]: I1014 13:38:22.463480 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69b9n\" (UniqueName: \"kubernetes.io/projected/e497759a-6e7f-423b-b8f7-9f52606d2ec3-kube-api-access-69b9n\") pod \"nova-cell1-novncproxy-0\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.530673 master-1 kubenswrapper[4740]: I1014 13:38:22.529917 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.530673 master-1 kubenswrapper[4740]: I1014 13:38:22.529988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-config-data\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.530673 master-1 kubenswrapper[4740]: I1014 13:38:22.530085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77qg\" (UniqueName: \"kubernetes.io/projected/f0f37760-d0d3-44d0-b4b2-88095f10222f-kube-api-access-j77qg\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.540700 master-1 kubenswrapper[4740]: I1014 13:38:22.534421 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.542022 master-1 kubenswrapper[4740]: I1014 13:38:22.540840 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-config-data\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.552370 master-1 kubenswrapper[4740]: I1014 13:38:22.552191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77qg\" (UniqueName: \"kubernetes.io/projected/f0f37760-d0d3-44d0-b4b2-88095f10222f-kube-api-access-j77qg\") pod \"nova-scheduler-1\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:22.636685 master-1 kubenswrapper[4740]: I1014 13:38:22.636580 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:22.657795 master-1 kubenswrapper[4740]: I1014 13:38:22.657712 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:38:22.728415 master-1 kubenswrapper[4740]: I1014 13:38:22.728365 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:38:22.746157 master-1 kubenswrapper[4740]: I1014 13:38:22.746098 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:38:22.748168 master-1 kubenswrapper[4740]: I1014 13:38:22.748078 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:38:22.753218 master-1 kubenswrapper[4740]: I1014 13:38:22.753186 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 13:38:22.782886 master-1 kubenswrapper[4740]: I1014 13:38:22.782179 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:38:22.838640 master-1 kubenswrapper[4740]: I1014 13:38:22.838512 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-config-data\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.838832 master-1 kubenswrapper[4740]: I1014 13:38:22.838769 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0c6dc3-247f-42bf-bd48-265621b2c202-logs\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.839154 master-1 kubenswrapper[4740]: I1014 13:38:22.839087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.839347 master-1 kubenswrapper[4740]: I1014 13:38:22.839312 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r88j5\" (UniqueName: \"kubernetes.io/projected/1d0c6dc3-247f-42bf-bd48-265621b2c202-kube-api-access-r88j5\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.953292 master-1 
kubenswrapper[4740]: I1014 13:38:22.951942 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r88j5\" (UniqueName: \"kubernetes.io/projected/1d0c6dc3-247f-42bf-bd48-265621b2c202-kube-api-access-r88j5\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.953292 master-1 kubenswrapper[4740]: I1014 13:38:22.952058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-config-data\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.953292 master-1 kubenswrapper[4740]: I1014 13:38:22.952136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0c6dc3-247f-42bf-bd48-265621b2c202-logs\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.953292 master-1 kubenswrapper[4740]: I1014 13:38:22.952255 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.955793 master-1 kubenswrapper[4740]: I1014 13:38:22.955753 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0c6dc3-247f-42bf-bd48-265621b2c202-logs\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.957414 master-1 kubenswrapper[4740]: I1014 13:38:22.956805 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.985574 master-1 kubenswrapper[4740]: I1014 13:38:22.985530 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-config-data\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:22.990436 master-1 kubenswrapper[4740]: I1014 13:38:22.990385 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r88j5\" (UniqueName: \"kubernetes.io/projected/1d0c6dc3-247f-42bf-bd48-265621b2c202-kube-api-access-r88j5\") pod \"nova-metadata-1\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " pod="openstack/nova-metadata-1" Oct 14 13:38:23.074587 master-1 kubenswrapper[4740]: I1014 13:38:23.074496 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:38:23.112523 master-1 kubenswrapper[4740]: I1014 13:38:23.112444 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Oct 14 13:38:23.117619 master-1 kubenswrapper[4740]: W1014 13:38:23.116756 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08db1e8c_2581_4212_b7ba_514ced29249f.slice/crio-d5e646894fc401839c6bce145dffff32c1c22df824f9d803f011b7218c298093 WatchSource:0}: Error finding container d5e646894fc401839c6bce145dffff32c1c22df824f9d803f011b7218c298093: Status 404 returned error can't find the container with id d5e646894fc401839c6bce145dffff32c1c22df824f9d803f011b7218c298093 Oct 14 13:38:23.673340 master-1 kubenswrapper[4740]: I1014 13:38:23.673279 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:38:23.683734 master-1 kubenswrapper[4740]: I1014 13:38:23.683642 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:23.684174 master-1 kubenswrapper[4740]: W1014 13:38:23.684129 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode497759a_6e7f_423b_b8f7_9f52606d2ec3.slice/crio-f979477cc889452c27f8fe562bf8cad5a5968eb5b435d290c9c7b9b07411c45d WatchSource:0}: Error finding container f979477cc889452c27f8fe562bf8cad5a5968eb5b435d290c9c7b9b07411c45d: Status 404 returned error can't find the container with id f979477cc889452c27f8fe562bf8cad5a5968eb5b435d290c9c7b9b07411c45d Oct 14 13:38:23.691713 master-1 kubenswrapper[4740]: I1014 13:38:23.691653 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:38:23.711277 master-1 kubenswrapper[4740]: W1014 13:38:23.710817 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod686b8fd6_af27_4819_b5f5_6fdffec65a98.slice/crio-2cf75e1a2b0904dca36dd591a8601139ad11d1b9c59d5cfc9af233f6cbebc20f WatchSource:0}: Error finding container 2cf75e1a2b0904dca36dd591a8601139ad11d1b9c59d5cfc9af233f6cbebc20f: Status 404 returned error can't find the container with id 2cf75e1a2b0904dca36dd591a8601139ad11d1b9c59d5cfc9af233f6cbebc20f Oct 14 13:38:23.895204 master-1 kubenswrapper[4740]: I1014 13:38:23.895145 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"08db1e8c-2581-4212-b7ba-514ced29249f","Type":"ContainerStarted","Data":"d5e646894fc401839c6bce145dffff32c1c22df824f9d803f011b7218c298093"} Oct 14 13:38:23.896441 master-1 kubenswrapper[4740]: I1014 13:38:23.896400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"f0f37760-d0d3-44d0-b4b2-88095f10222f","Type":"ContainerStarted","Data":"ba2444aaae52108b11402b60ca38d4d8d6c7173849af54861240ae200855a9e0"} Oct 14 13:38:23.897438 master-1 kubenswrapper[4740]: I1014 13:38:23.897399 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"686b8fd6-af27-4819-b5f5-6fdffec65a98","Type":"ContainerStarted","Data":"2cf75e1a2b0904dca36dd591a8601139ad11d1b9c59d5cfc9af233f6cbebc20f"} Oct 14 13:38:23.898390 master-1 kubenswrapper[4740]: I1014 13:38:23.898357 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e497759a-6e7f-423b-b8f7-9f52606d2ec3","Type":"ContainerStarted","Data":"f979477cc889452c27f8fe562bf8cad5a5968eb5b435d290c9c7b9b07411c45d"} Oct 14 13:38:24.005391 master-1 kubenswrapper[4740]: I1014 13:38:24.005338 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:38:24.909873 master-1 kubenswrapper[4740]: I1014 13:38:24.909827 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-1" event={"ID":"1d0c6dc3-247f-42bf-bd48-265621b2c202","Type":"ContainerStarted","Data":"81c5921b264158e81e9e7eef723673ac6522b3b33877cb89230200e8788b1960"}
Oct 14 13:38:25.879331 master-1 kubenswrapper[4740]: I1014 13:38:25.879207 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 14 13:38:29.549659 master-1 kubenswrapper[4740]: I1014 13:38:29.549563 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Oct 14 13:38:29.552218 master-1 kubenswrapper[4740]: I1014 13:38:29.552176 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 14 13:38:29.556336 master-1 kubenswrapper[4740]: I1014 13:38:29.556097 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Oct 14 13:38:29.556336 master-1 kubenswrapper[4740]: I1014 13:38:29.556157 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Oct 14 13:38:29.579480 master-1 kubenswrapper[4740]: I1014 13:38:29.578971 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Oct 14 13:38:29.725791 master-1 kubenswrapper[4740]: I1014 13:38:29.725735 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-config-data\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.726050 master-1 kubenswrapper[4740]: I1014 13:38:29.725986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.726158 master-1 kubenswrapper[4740]: I1014 13:38:29.726131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhn92\" (UniqueName: \"kubernetes.io/projected/fff845e7-62de-421e-80e6-e85408dc48be-kube-api-access-qhn92\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.726307 master-1 kubenswrapper[4740]: I1014 13:38:29.726291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-scripts\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.827673 master-1 kubenswrapper[4740]: I1014 13:38:29.827544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-config-data\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.827865 master-1 kubenswrapper[4740]: I1014 13:38:29.827704 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.827865 master-1 kubenswrapper[4740]: I1014 13:38:29.827765 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhn92\" (UniqueName: \"kubernetes.io/projected/fff845e7-62de-421e-80e6-e85408dc48be-kube-api-access-qhn92\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.827960 master-1 kubenswrapper[4740]: I1014 13:38:29.827865 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-scripts\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.831709 master-1 kubenswrapper[4740]: I1014 13:38:29.831674 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-scripts\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.832195 master-1 kubenswrapper[4740]: I1014 13:38:29.832093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-config-data\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.832730 master-1 kubenswrapper[4740]: I1014 13:38:29.832669 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-combined-ca-bundle\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.854870 master-1 kubenswrapper[4740]: I1014 13:38:29.854790 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhn92\" (UniqueName: \"kubernetes.io/projected/fff845e7-62de-421e-80e6-e85408dc48be-kube-api-access-qhn92\") pod \"aodh-0\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") " pod="openstack/aodh-0"
Oct 14 13:38:29.922450 master-1 kubenswrapper[4740]: I1014 13:38:29.922367 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 14 13:38:31.944164 master-1 kubenswrapper[4740]: I1014 13:38:31.944120 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9"
Oct 14 13:38:31.944668 master-1 kubenswrapper[4740]: E1014 13:38:31.944433 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:38:35.642908 master-1 kubenswrapper[4740]: I1014 13:38:35.642720 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"]
Oct 14 13:38:35.707323 master-1 kubenswrapper[4740]: I1014 13:38:35.705483 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cd59f759-c7xdl"]
Oct 14 13:38:35.707323 master-1 kubenswrapper[4740]: I1014 13:38:35.707111 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:35.726029 master-1 kubenswrapper[4740]: I1014 13:38:35.725298 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd59f759-c7xdl"]
Oct 14 13:38:35.908948 master-1 kubenswrapper[4740]: I1014 13:38:35.908584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:35.908948 master-1 kubenswrapper[4740]: I1014 13:38:35.908669 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:35.908948 master-1 kubenswrapper[4740]: I1014 13:38:35.908704 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kmj9\" (UniqueName: \"kubernetes.io/projected/c9e0f72f-8c88-4297-a690-dd519cb22ec5-kube-api-access-6kmj9\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:35.908948 master-1 kubenswrapper[4740]: I1014 13:38:35.908752 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-svc\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:35.909403 master-1 kubenswrapper[4740]: I1014 13:38:35.909243 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:35.909508 master-1 kubenswrapper[4740]: I1014 13:38:35.909448 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-config\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.012256 master-1 kubenswrapper[4740]: I1014 13:38:36.012182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.014888 master-1 kubenswrapper[4740]: I1014 13:38:36.012892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-config\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.014888 master-1 kubenswrapper[4740]: I1014 13:38:36.013075 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.014888 master-1 kubenswrapper[4740]: I1014 13:38:36.013116 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.014888 master-1 kubenswrapper[4740]: I1014 13:38:36.013140 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kmj9\" (UniqueName: \"kubernetes.io/projected/c9e0f72f-8c88-4297-a690-dd519cb22ec5-kube-api-access-6kmj9\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.014888 master-1 kubenswrapper[4740]: I1014 13:38:36.013183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-svc\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.014888 master-1 kubenswrapper[4740]: I1014 13:38:36.014477 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.015373 master-1 kubenswrapper[4740]: I1014 13:38:36.015284 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.021315 master-1 kubenswrapper[4740]: I1014 13:38:36.015451 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-config\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.021315 master-1 kubenswrapper[4740]: I1014 13:38:36.016090 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-svc\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.037398 master-1 kubenswrapper[4740]: I1014 13:38:36.015553 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.041524 master-1 kubenswrapper[4740]: I1014 13:38:36.041500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kmj9\" (UniqueName: \"kubernetes.io/projected/c9e0f72f-8c88-4297-a690-dd519cb22ec5-kube-api-access-6kmj9\") pod \"dnsmasq-dns-6cd59f759-c7xdl\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") " pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:36.053829 master-1 kubenswrapper[4740]: I1014 13:38:36.053777 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:38.702919 master-1 kubenswrapper[4740]: I1014 13:38:38.702768 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Oct 14 13:38:39.924181 master-1 kubenswrapper[4740]: I1014 13:38:39.924130 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 13:38:39.925038 master-1 kubenswrapper[4740]: I1014 13:38:39.924951 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="5753ddc2-c44f-411a-a53a-ad0d1a38efed" containerName="kube-state-metrics" containerID="cri-o://f09c3093152b6c7dc691029f6591431e0adb91c6b489af94a85d72caa465eccb" gracePeriod=30
Oct 14 13:38:40.171446 master-1 kubenswrapper[4740]: I1014 13:38:40.171374 4740 generic.go:334] "Generic (PLEG): container finished" podID="5753ddc2-c44f-411a-a53a-ad0d1a38efed" containerID="f09c3093152b6c7dc691029f6591431e0adb91c6b489af94a85d72caa465eccb" exitCode=2
Oct 14 13:38:40.171446 master-1 kubenswrapper[4740]: I1014 13:38:40.171439 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5753ddc2-c44f-411a-a53a-ad0d1a38efed","Type":"ContainerDied","Data":"f09c3093152b6c7dc691029f6591431e0adb91c6b489af94a85d72caa465eccb"}
Oct 14 13:38:42.673599 master-1 kubenswrapper[4740]: I1014 13:38:42.671646 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 14 13:38:42.829628 master-1 kubenswrapper[4740]: I1014 13:38:42.829565 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ctdb\" (UniqueName: \"kubernetes.io/projected/5753ddc2-c44f-411a-a53a-ad0d1a38efed-kube-api-access-2ctdb\") pod \"5753ddc2-c44f-411a-a53a-ad0d1a38efed\" (UID: \"5753ddc2-c44f-411a-a53a-ad0d1a38efed\") "
Oct 14 13:38:42.841343 master-1 kubenswrapper[4740]: I1014 13:38:42.837478 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5753ddc2-c44f-411a-a53a-ad0d1a38efed-kube-api-access-2ctdb" (OuterVolumeSpecName: "kube-api-access-2ctdb") pod "5753ddc2-c44f-411a-a53a-ad0d1a38efed" (UID: "5753ddc2-c44f-411a-a53a-ad0d1a38efed"). InnerVolumeSpecName "kube-api-access-2ctdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:38:42.918222 master-1 kubenswrapper[4740]: I1014 13:38:42.918175 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd59f759-c7xdl"]
Oct 14 13:38:42.937256 master-1 kubenswrapper[4740]: I1014 13:38:42.933869 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ctdb\" (UniqueName: \"kubernetes.io/projected/5753ddc2-c44f-411a-a53a-ad0d1a38efed-kube-api-access-2ctdb\") on node \"master-1\" DevicePath \"\""
Oct 14 13:38:42.987319 master-1 kubenswrapper[4740]: I1014 13:38:42.987268 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Oct 14 13:38:43.242141 master-1 kubenswrapper[4740]: I1014 13:38:43.242084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerStarted","Data":"68bc81b87d27f2b06c0275e58d0e7d3ea07f9e2bc74633a86445fbf9063ca145"}
Oct 14 13:38:43.247660 master-1 kubenswrapper[4740]: I1014 13:38:43.247634 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"f0f37760-d0d3-44d0-b4b2-88095f10222f","Type":"ContainerStarted","Data":"dabf36a612e5327ab76517bfa400b882cf8ae5162383fc394bfecd3a1bd05c57"}
Oct 14 13:38:43.254337 master-1 kubenswrapper[4740]: I1014 13:38:43.253917 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1d0c6dc3-247f-42bf-bd48-265621b2c202","Type":"ContainerStarted","Data":"8dd96197bc75e254b98fcd8d332a2bca0a60437b93e392c3892305ff01c6c560"}
Oct 14 13:38:43.254337 master-1 kubenswrapper[4740]: I1014 13:38:43.253958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1d0c6dc3-247f-42bf-bd48-265621b2c202","Type":"ContainerStarted","Data":"bb6216313388627f07cc5f9d7f3fc804b44df3998a68a98115ab7d89403eecc4"}
Oct 14 13:38:43.256141 master-1 kubenswrapper[4740]: I1014 13:38:43.255461 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" event={"ID":"c9e0f72f-8c88-4297-a690-dd519cb22ec5","Type":"ContainerStarted","Data":"288dc818a52e34d75efc12a8c987dc4405e8e0836ba95f6e0b8b9250ca47d3f4"}
Oct 14 13:38:43.261915 master-1 kubenswrapper[4740]: I1014 13:38:43.261487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"686b8fd6-af27-4819-b5f5-6fdffec65a98","Type":"ContainerStarted","Data":"7db78e79948eef202e82d618344b8766cca0d985ab5610a03e68edf1ca884398"}
Oct 14 13:38:43.261915 master-1 kubenswrapper[4740]: I1014 13:38:43.261635 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"686b8fd6-af27-4819-b5f5-6fdffec65a98","Type":"ContainerStarted","Data":"4b5ef1ac8039675953f0f8a1f8700870dc204cd44e9c766e0d0bc0c36989ca99"}
Oct 14 13:38:43.261915 master-1 kubenswrapper[4740]: I1014 13:38:43.261776 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-log" containerID="cri-o://4b5ef1ac8039675953f0f8a1f8700870dc204cd44e9c766e0d0bc0c36989ca99" gracePeriod=30
Oct 14 13:38:43.262059 master-1 kubenswrapper[4740]: I1014 13:38:43.261869 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-api" containerID="cri-o://7db78e79948eef202e82d618344b8766cca0d985ab5610a03e68edf1ca884398" gracePeriod=30
Oct 14 13:38:43.272491 master-1 kubenswrapper[4740]: I1014 13:38:43.272399 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5753ddc2-c44f-411a-a53a-ad0d1a38efed","Type":"ContainerDied","Data":"19bd023cc3e7d3f13382a1c6ff72c76d83c7272da7550367e00f0cf0cf6edf69"}
Oct 14 13:38:43.272491 master-1 kubenswrapper[4740]: I1014 13:38:43.272457 4740 scope.go:117] "RemoveContainer" containerID="f09c3093152b6c7dc691029f6591431e0adb91c6b489af94a85d72caa465eccb"
Oct 14 13:38:43.272743 master-1 kubenswrapper[4740]: I1014 13:38:43.272576 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 14 13:38:43.290856 master-1 kubenswrapper[4740]: I1014 13:38:43.280576 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-1" podStartSLOduration=2.980993755 podStartE2EDuration="21.280552239s" podCreationTimestamp="2025-10-14 13:38:22 +0000 UTC" firstStartedPulling="2025-10-14 13:38:23.693015883 +0000 UTC m=+1929.503305212" lastFinishedPulling="2025-10-14 13:38:41.992574377 +0000 UTC m=+1947.802863696" observedRunningTime="2025-10-14 13:38:43.278845414 +0000 UTC m=+1949.089134743" watchObservedRunningTime="2025-10-14 13:38:43.280552239 +0000 UTC m=+1949.090841578"
Oct 14 13:38:43.290856 master-1 kubenswrapper[4740]: I1014 13:38:43.288126 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e497759a-6e7f-423b-b8f7-9f52606d2ec3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://aa4018bca6359ae54f263cbc8c8b2561130f68dd4d8f59a779befef271e6d2bc" gracePeriod=30
Oct 14 13:38:43.290856 master-1 kubenswrapper[4740]: I1014 13:38:43.288333 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e497759a-6e7f-423b-b8f7-9f52606d2ec3","Type":"ContainerStarted","Data":"aa4018bca6359ae54f263cbc8c8b2561130f68dd4d8f59a779befef271e6d2bc"}
Oct 14 13:38:43.295590 master-1 kubenswrapper[4740]: I1014 13:38:43.294772 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"08db1e8c-2581-4212-b7ba-514ced29249f","Type":"ContainerStarted","Data":"ccd3f57ac94e42bbf9f6a28f7e8686fe961eaca4d1214b3c2db570abdeafc247"}
Oct 14 13:38:43.296854 master-1 kubenswrapper[4740]: I1014 13:38:43.295899 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0"
Oct 14 13:38:43.340377 master-1 kubenswrapper[4740]: I1014 13:38:43.339625 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-1" podStartSLOduration=3.335859772 podStartE2EDuration="21.339604413s" podCreationTimestamp="2025-10-14 13:38:22 +0000 UTC" firstStartedPulling="2025-10-14 13:38:24.021150712 +0000 UTC m=+1929.831440041" lastFinishedPulling="2025-10-14 13:38:42.024895353 +0000 UTC m=+1947.835184682" observedRunningTime="2025-10-14 13:38:43.334360734 +0000 UTC m=+1949.144650083" watchObservedRunningTime="2025-10-14 13:38:43.339604413 +0000 UTC m=+1949.149893742"
Oct 14 13:38:43.343347 master-1 kubenswrapper[4740]: I1014 13:38:43.342312 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0"
Oct 14 13:38:43.395437 master-1 kubenswrapper[4740]: I1014 13:38:43.395249 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=3.120123596 podStartE2EDuration="21.395195036s" podCreationTimestamp="2025-10-14 13:38:22 +0000 UTC" firstStartedPulling="2025-10-14 13:38:23.717346503 +0000 UTC m=+1929.527635822" lastFinishedPulling="2025-10-14 13:38:41.992417933 +0000 UTC m=+1947.802707262" observedRunningTime="2025-10-14 13:38:43.381488783 +0000 UTC m=+1949.191778122" watchObservedRunningTime="2025-10-14 13:38:43.395195036 +0000 UTC m=+1949.205484365"
Oct 14 13:38:43.413599 master-1 kubenswrapper[4740]: I1014 13:38:43.413547 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 13:38:43.441475 master-1 kubenswrapper[4740]: I1014 13:38:43.438614 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 13:38:43.441475 master-1 kubenswrapper[4740]: I1014 13:38:43.440815 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.121484609 podStartE2EDuration="21.440766043s" podCreationTimestamp="2025-10-14 13:38:22 +0000 UTC" firstStartedPulling="2025-10-14 13:38:23.688061233 +0000 UTC m=+1929.498350562" lastFinishedPulling="2025-10-14 13:38:42.007342667 +0000 UTC m=+1947.817631996" observedRunningTime="2025-10-14 13:38:43.433730717 +0000 UTC m=+1949.244020046" watchObservedRunningTime="2025-10-14 13:38:43.440766043 +0000 UTC m=+1949.251055382"
Oct 14 13:38:43.476684 master-1 kubenswrapper[4740]: I1014 13:38:43.476372 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.571555651 podStartE2EDuration="21.476346536s" podCreationTimestamp="2025-10-14 13:38:22 +0000 UTC" firstStartedPulling="2025-10-14 13:38:23.120216681 +0000 UTC m=+1928.930506030" lastFinishedPulling="2025-10-14 13:38:42.025007586 +0000 UTC m=+1947.835296915" observedRunningTime="2025-10-14 13:38:43.471485907 +0000 UTC m=+1949.281775246" watchObservedRunningTime="2025-10-14 13:38:43.476346536 +0000 UTC m=+1949.286635865"
Oct 14 13:38:43.944670 master-1 kubenswrapper[4740]: I1014 13:38:43.944612 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9"
Oct 14 13:38:43.944670 master-1 kubenswrapper[4740]: E1014 13:38:43.944857 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:38:44.309835 master-1 kubenswrapper[4740]: I1014 13:38:44.307839 4740 generic.go:334] "Generic (PLEG): container finished" podID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerID="461e0146f1ea40b9a3f5f4aef2fea3cfa251134721cf2c31a7102aa2b4eafb4a" exitCode=0
Oct 14 13:38:44.309835 master-1 kubenswrapper[4740]: I1014 13:38:44.307900 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" event={"ID":"c9e0f72f-8c88-4297-a690-dd519cb22ec5","Type":"ContainerDied","Data":"461e0146f1ea40b9a3f5f4aef2fea3cfa251134721cf2c31a7102aa2b4eafb4a"}
Oct 14 13:38:44.313554 master-1 kubenswrapper[4740]: I1014 13:38:44.313257 4740 generic.go:334] "Generic (PLEG): container finished" podID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerID="7db78e79948eef202e82d618344b8766cca0d985ab5610a03e68edf1ca884398" exitCode=0
Oct 14 13:38:44.313554 master-1 kubenswrapper[4740]: I1014 13:38:44.313297 4740 generic.go:334] "Generic (PLEG): container finished" podID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerID="4b5ef1ac8039675953f0f8a1f8700870dc204cd44e9c766e0d0bc0c36989ca99" exitCode=143
Oct 14 13:38:44.314269 master-1 kubenswrapper[4740]: I1014 13:38:44.314241 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"686b8fd6-af27-4819-b5f5-6fdffec65a98","Type":"ContainerDied","Data":"7db78e79948eef202e82d618344b8766cca0d985ab5610a03e68edf1ca884398"}
Oct 14 13:38:44.314335 master-1 kubenswrapper[4740]: I1014 13:38:44.314275 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"686b8fd6-af27-4819-b5f5-6fdffec65a98","Type":"ContainerDied","Data":"4b5ef1ac8039675953f0f8a1f8700870dc204cd44e9c766e0d0bc0c36989ca99"}
Oct 14 13:38:44.512455 master-1 kubenswrapper[4740]: I1014 13:38:44.512372 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 14 13:38:44.695049 master-1 kubenswrapper[4740]: I1014 13:38:44.694989 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jcpg\" (UniqueName: \"kubernetes.io/projected/686b8fd6-af27-4819-b5f5-6fdffec65a98-kube-api-access-5jcpg\") pod \"686b8fd6-af27-4819-b5f5-6fdffec65a98\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") "
Oct 14 13:38:44.695420 master-1 kubenswrapper[4740]: I1014 13:38:44.695400 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/686b8fd6-af27-4819-b5f5-6fdffec65a98-logs\") pod \"686b8fd6-af27-4819-b5f5-6fdffec65a98\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") "
Oct 14 13:38:44.695509 master-1 kubenswrapper[4740]: I1014 13:38:44.695492 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-config-data\") pod \"686b8fd6-af27-4819-b5f5-6fdffec65a98\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") "
Oct 14 13:38:44.695555 master-1 kubenswrapper[4740]: I1014 13:38:44.695528 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-combined-ca-bundle\") pod \"686b8fd6-af27-4819-b5f5-6fdffec65a98\" (UID: \"686b8fd6-af27-4819-b5f5-6fdffec65a98\") "
Oct 14 13:38:44.695915 master-1 kubenswrapper[4740]: I1014 13:38:44.695857 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/686b8fd6-af27-4819-b5f5-6fdffec65a98-logs" (OuterVolumeSpecName: "logs") pod "686b8fd6-af27-4819-b5f5-6fdffec65a98" (UID: "686b8fd6-af27-4819-b5f5-6fdffec65a98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:38:44.698441 master-1 kubenswrapper[4740]: I1014 13:38:44.698379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686b8fd6-af27-4819-b5f5-6fdffec65a98-kube-api-access-5jcpg" (OuterVolumeSpecName: "kube-api-access-5jcpg") pod "686b8fd6-af27-4819-b5f5-6fdffec65a98" (UID: "686b8fd6-af27-4819-b5f5-6fdffec65a98"). InnerVolumeSpecName "kube-api-access-5jcpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:38:44.724321 master-1 kubenswrapper[4740]: I1014 13:38:44.724195 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "686b8fd6-af27-4819-b5f5-6fdffec65a98" (UID: "686b8fd6-af27-4819-b5f5-6fdffec65a98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:38:44.741045 master-1 kubenswrapper[4740]: I1014 13:38:44.740953 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-config-data" (OuterVolumeSpecName: "config-data") pod "686b8fd6-af27-4819-b5f5-6fdffec65a98" (UID: "686b8fd6-af27-4819-b5f5-6fdffec65a98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:38:44.799286 master-1 kubenswrapper[4740]: I1014 13:38:44.798408 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/686b8fd6-af27-4819-b5f5-6fdffec65a98-logs\") on node \"master-1\" DevicePath \"\""
Oct 14 13:38:44.799286 master-1 kubenswrapper[4740]: I1014 13:38:44.798457 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:38:44.799286 master-1 kubenswrapper[4740]: I1014 13:38:44.798473 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686b8fd6-af27-4819-b5f5-6fdffec65a98-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:38:44.799286 master-1 kubenswrapper[4740]: I1014 13:38:44.798488 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jcpg\" (UniqueName: \"kubernetes.io/projected/686b8fd6-af27-4819-b5f5-6fdffec65a98-kube-api-access-5jcpg\") on node \"master-1\" DevicePath \"\""
Oct 14 13:38:44.961182 master-1 kubenswrapper[4740]: I1014 13:38:44.961036 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5753ddc2-c44f-411a-a53a-ad0d1a38efed" path="/var/lib/kubelet/pods/5753ddc2-c44f-411a-a53a-ad0d1a38efed/volumes"
Oct 14 13:38:45.325272 master-1 kubenswrapper[4740]: I1014 13:38:45.325150 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" event={"ID":"c9e0f72f-8c88-4297-a690-dd519cb22ec5","Type":"ContainerStarted","Data":"9aa3890420154879771cf70d24c10f047f249594821c63f9104e47beefee1c06"}
Oct 14 13:38:45.325674 master-1 kubenswrapper[4740]: I1014 13:38:45.325297 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:38:45.327456 master-1 kubenswrapper[4740]: I1014 13:38:45.327397 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"686b8fd6-af27-4819-b5f5-6fdffec65a98","Type":"ContainerDied","Data":"2cf75e1a2b0904dca36dd591a8601139ad11d1b9c59d5cfc9af233f6cbebc20f"}
Oct 14 13:38:45.327591 master-1 kubenswrapper[4740]: I1014 13:38:45.327468 4740 scope.go:117] "RemoveContainer" containerID="7db78e79948eef202e82d618344b8766cca0d985ab5610a03e68edf1ca884398"
Oct 14 13:38:45.327651 master-1 kubenswrapper[4740]: I1014 13:38:45.327605 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 14 13:38:45.365544 master-1 kubenswrapper[4740]: I1014 13:38:45.365411 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" podStartSLOduration=10.36537656 podStartE2EDuration="10.36537656s" podCreationTimestamp="2025-10-14 13:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:38:45.356153526 +0000 UTC m=+1951.166442895" watchObservedRunningTime="2025-10-14 13:38:45.36537656 +0000 UTC m=+1951.175665909"
Oct 14 13:38:45.405926 master-1 kubenswrapper[4740]: I1014 13:38:45.405749 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"]
Oct 14 13:38:45.431269 master-1 kubenswrapper[4740]: I1014 13:38:45.420343 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"]
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.447156 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"]
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: E1014 13:38:45.447590 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-api"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.447609 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-api"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: E1014 13:38:45.447643 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-log"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.447653 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-log"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: E1014 13:38:45.447673 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5753ddc2-c44f-411a-a53a-ad0d1a38efed" containerName="kube-state-metrics"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.447683 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5753ddc2-c44f-411a-a53a-ad0d1a38efed" containerName="kube-state-metrics"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.447866 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-log"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.447887 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" containerName="nova-api-api"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.447913 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5753ddc2-c44f-411a-a53a-ad0d1a38efed" containerName="kube-state-metrics"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.449455 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 14 13:38:45.467271 master-1 kubenswrapper[4740]: I1014 13:38:45.454112 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 14 13:38:45.475280 master-1 kubenswrapper[4740]: I1014 13:38:45.468314 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"]
Oct 14 13:38:45.616263 master-1 kubenswrapper[4740]: I1014 13:38:45.616056 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtmq\" (UniqueName: \"kubernetes.io/projected/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-kube-api-access-xvtmq\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2"
Oct 14 13:38:45.616263 master-1 kubenswrapper[4740]: I1014 13:38:45.616164 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-config-data\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2"
Oct 14 13:38:45.617405 master-1 kubenswrapper[4740]: I1014 13:38:45.617352 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2"
Oct 14 13:38:45.617675 master-1 kubenswrapper[4740]: I1014 13:38:45.617650 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-logs\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2"
Oct 14 13:38:45.720465 master-1 kubenswrapper[4740]: I1014 13:38:45.719699 4740
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-logs\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.720465 master-1 kubenswrapper[4740]: I1014 13:38:45.719846 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtmq\" (UniqueName: \"kubernetes.io/projected/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-kube-api-access-xvtmq\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.720465 master-1 kubenswrapper[4740]: I1014 13:38:45.719910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-config-data\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.720465 master-1 kubenswrapper[4740]: I1014 13:38:45.719932 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.720465 master-1 kubenswrapper[4740]: I1014 13:38:45.720243 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-logs\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.724289 master-1 kubenswrapper[4740]: I1014 13:38:45.724250 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-config-data\") pod \"nova-api-2\" (UID: 
\"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.724843 master-1 kubenswrapper[4740]: I1014 13:38:45.724804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.765148 master-1 kubenswrapper[4740]: I1014 13:38:45.765093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtmq\" (UniqueName: \"kubernetes.io/projected/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-kube-api-access-xvtmq\") pod \"nova-api-2\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " pod="openstack/nova-api-2" Oct 14 13:38:45.796305 master-1 kubenswrapper[4740]: I1014 13:38:45.795316 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:38:46.954352 master-1 kubenswrapper[4740]: I1014 13:38:46.954295 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686b8fd6-af27-4819-b5f5-6fdffec65a98" path="/var/lib/kubelet/pods/686b8fd6-af27-4819-b5f5-6fdffec65a98/volumes" Oct 14 13:38:47.637613 master-1 kubenswrapper[4740]: I1014 13:38:47.637528 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:38:47.728993 master-1 kubenswrapper[4740]: I1014 13:38:47.728935 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-1" Oct 14 13:38:47.826757 master-1 kubenswrapper[4740]: I1014 13:38:47.825372 4740 scope.go:117] "RemoveContainer" containerID="4b5ef1ac8039675953f0f8a1f8700870dc204cd44e9c766e0d0bc0c36989ca99" Oct 14 13:38:48.074661 master-1 kubenswrapper[4740]: I1014 13:38:48.074595 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 14 13:38:48.075137 
master-1 kubenswrapper[4740]: I1014 13:38:48.074674 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 14 13:38:48.312633 master-1 kubenswrapper[4740]: I1014 13:38:48.312524 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:38:48.360747 master-1 kubenswrapper[4740]: I1014 13:38:48.360658 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a","Type":"ContainerStarted","Data":"ea8b56b14c43c75c14b8b7c85c0c575c9a23c67743c929f6babe38b2019cee55"} Oct 14 13:38:48.364343 master-1 kubenswrapper[4740]: I1014 13:38:48.364262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerStarted","Data":"5bac30b0bf2098c76e92a87b1a38be9c96fdd5009184f6401cb841effecb9d35"} Oct 14 13:38:49.383753 master-1 kubenswrapper[4740]: I1014 13:38:49.383700 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a","Type":"ContainerStarted","Data":"7bb2883e4220cfe413d82f380b03780663e46f55d51c617e8eaf7143cfd7e258"} Oct 14 13:38:49.384522 master-1 kubenswrapper[4740]: I1014 13:38:49.384499 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:49.384647 master-1 kubenswrapper[4740]: I1014 13:38:49.384629 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a","Type":"ContainerStarted","Data":"bc2f610a777baef297a925b65687d8569ac1963fffb5986993ce2fdd4d44bc07"} Oct 14 13:38:49.385051 master-1 kubenswrapper[4740]: I1014 13:38:49.384980 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-1" podUID="f0f37760-d0d3-44d0-b4b2-88095f10222f" containerName="nova-scheduler-scheduler" 
containerID="cri-o://dabf36a612e5327ab76517bfa400b882cf8ae5162383fc394bfecd3a1bd05c57" gracePeriod=30 Oct 14 13:38:49.424478 master-1 kubenswrapper[4740]: I1014 13:38:49.424399 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=4.424381963 podStartE2EDuration="4.424381963s" podCreationTimestamp="2025-10-14 13:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:38:49.418098836 +0000 UTC m=+1955.228388185" watchObservedRunningTime="2025-10-14 13:38:49.424381963 +0000 UTC m=+1955.234671292" Oct 14 13:38:50.419530 master-1 kubenswrapper[4740]: I1014 13:38:50.419245 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerStarted","Data":"0d8c04eeb24406133e1f3253a384b5a4b674f78fe6ddf5492016e2eb638c6b46"} Oct 14 13:38:51.055602 master-1 kubenswrapper[4740]: I1014 13:38:51.055518 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" Oct 14 13:38:51.179342 master-1 kubenswrapper[4740]: I1014 13:38:51.179278 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8b568997-972jn"] Oct 14 13:38:51.180626 master-1 kubenswrapper[4740]: I1014 13:38:51.179611 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f8b568997-972jn" podUID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerName="dnsmasq-dns" containerID="cri-o://3a1f848ad17cf9bd1575faaa3b50e700f50c0398c2977c590e297d1c7978a8c7" gracePeriod=10 Oct 14 13:38:51.432855 master-1 kubenswrapper[4740]: I1014 13:38:51.430245 4740 generic.go:334] "Generic (PLEG): container finished" podID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerID="3a1f848ad17cf9bd1575faaa3b50e700f50c0398c2977c590e297d1c7978a8c7" exitCode=0 Oct 14 
13:38:51.432855 master-1 kubenswrapper[4740]: I1014 13:38:51.430315 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8b568997-972jn" event={"ID":"c40f97f4-5012-4f9c-bb3b-5bb53d3544be","Type":"ContainerDied","Data":"3a1f848ad17cf9bd1575faaa3b50e700f50c0398c2977c590e297d1c7978a8c7"} Oct 14 13:38:52.347464 master-1 kubenswrapper[4740]: I1014 13:38:52.347205 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:38:52.446647 master-1 kubenswrapper[4740]: I1014 13:38:52.446586 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8b568997-972jn" event={"ID":"c40f97f4-5012-4f9c-bb3b-5bb53d3544be","Type":"ContainerDied","Data":"9e866b457ee329dfd5d0b2f39e75e97a6c3b90210ba2c5975d723768b51a288b"} Oct 14 13:38:52.447195 master-1 kubenswrapper[4740]: I1014 13:38:52.446667 4740 scope.go:117] "RemoveContainer" containerID="3a1f848ad17cf9bd1575faaa3b50e700f50c0398c2977c590e297d1c7978a8c7" Oct 14 13:38:52.447195 master-1 kubenswrapper[4740]: I1014 13:38:52.446850 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8b568997-972jn" Oct 14 13:38:52.454059 master-1 kubenswrapper[4740]: I1014 13:38:52.453952 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerStarted","Data":"cef992ea89ebf50dfda1bc86a49651108b422ad03faad05e23bbfc6a2cb69887"} Oct 14 13:38:52.478941 master-1 kubenswrapper[4740]: I1014 13:38:52.478898 4740 scope.go:117] "RemoveContainer" containerID="53da13037470673ca6135247826d3dac951c542ca939362e73081083c420aaa2" Oct 14 13:38:52.488927 master-1 kubenswrapper[4740]: I1014 13:38:52.488873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvrv5\" (UniqueName: \"kubernetes.io/projected/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-kube-api-access-rvrv5\") pod \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " Oct 14 13:38:52.489096 master-1 kubenswrapper[4740]: I1014 13:38:52.488970 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-config\") pod \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " Oct 14 13:38:52.489096 master-1 kubenswrapper[4740]: I1014 13:38:52.489085 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-nb\") pod \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " Oct 14 13:38:52.489174 master-1 kubenswrapper[4740]: I1014 13:38:52.489164 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-sb\") pod \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\" (UID: 
\"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " Oct 14 13:38:52.490196 master-1 kubenswrapper[4740]: I1014 13:38:52.489651 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-swift-storage-0\") pod \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " Oct 14 13:38:52.490196 master-1 kubenswrapper[4740]: I1014 13:38:52.489737 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-svc\") pod \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\" (UID: \"c40f97f4-5012-4f9c-bb3b-5bb53d3544be\") " Oct 14 13:38:52.493758 master-1 kubenswrapper[4740]: I1014 13:38:52.493710 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-kube-api-access-rvrv5" (OuterVolumeSpecName: "kube-api-access-rvrv5") pod "c40f97f4-5012-4f9c-bb3b-5bb53d3544be" (UID: "c40f97f4-5012-4f9c-bb3b-5bb53d3544be"). InnerVolumeSpecName "kube-api-access-rvrv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:38:52.537880 master-1 kubenswrapper[4740]: I1014 13:38:52.537794 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c40f97f4-5012-4f9c-bb3b-5bb53d3544be" (UID: "c40f97f4-5012-4f9c-bb3b-5bb53d3544be"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:38:52.539676 master-1 kubenswrapper[4740]: I1014 13:38:52.539610 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c40f97f4-5012-4f9c-bb3b-5bb53d3544be" (UID: "c40f97f4-5012-4f9c-bb3b-5bb53d3544be"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:38:52.539748 master-1 kubenswrapper[4740]: I1014 13:38:52.539669 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c40f97f4-5012-4f9c-bb3b-5bb53d3544be" (UID: "c40f97f4-5012-4f9c-bb3b-5bb53d3544be"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:38:52.546859 master-1 kubenswrapper[4740]: I1014 13:38:52.546754 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c40f97f4-5012-4f9c-bb3b-5bb53d3544be" (UID: "c40f97f4-5012-4f9c-bb3b-5bb53d3544be"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:38:52.564223 master-1 kubenswrapper[4740]: I1014 13:38:52.564149 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-config" (OuterVolumeSpecName: "config") pod "c40f97f4-5012-4f9c-bb3b-5bb53d3544be" (UID: "c40f97f4-5012-4f9c-bb3b-5bb53d3544be"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:38:52.593079 master-1 kubenswrapper[4740]: I1014 13:38:52.593009 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvrv5\" (UniqueName: \"kubernetes.io/projected/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-kube-api-access-rvrv5\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:52.593079 master-1 kubenswrapper[4740]: I1014 13:38:52.593063 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-config\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:52.593079 master-1 kubenswrapper[4740]: I1014 13:38:52.593076 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:52.593079 master-1 kubenswrapper[4740]: I1014 13:38:52.593088 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:52.593548 master-1 kubenswrapper[4740]: I1014 13:38:52.593099 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:52.593548 master-1 kubenswrapper[4740]: I1014 13:38:52.593110 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c40f97f4-5012-4f9c-bb3b-5bb53d3544be-dns-svc\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:52.797372 master-1 kubenswrapper[4740]: I1014 13:38:52.796495 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8b568997-972jn"] Oct 14 13:38:52.803190 master-1 kubenswrapper[4740]: I1014 
13:38:52.803125 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8b568997-972jn"] Oct 14 13:38:52.975740 master-1 kubenswrapper[4740]: I1014 13:38:52.975585 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" path="/var/lib/kubelet/pods/c40f97f4-5012-4f9c-bb3b-5bb53d3544be/volumes" Oct 14 13:38:53.074944 master-1 kubenswrapper[4740]: I1014 13:38:53.074877 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 14 13:38:53.075337 master-1 kubenswrapper[4740]: I1014 13:38:53.075288 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 14 13:38:54.168793 master-1 kubenswrapper[4740]: I1014 13:38:54.167976 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.128.0.175:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:38:54.168793 master-1 kubenswrapper[4740]: I1014 13:38:54.167978 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.128.0.175:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:38:54.483372 master-1 kubenswrapper[4740]: I1014 13:38:54.483196 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerStarted","Data":"b9597e69ec4eb1049fc8bd28291d9c93eb820e02dd2554931f689c03f99af018"} Oct 14 13:38:54.483725 master-1 kubenswrapper[4740]: I1014 13:38:54.483687 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-listener" containerID="cri-o://b9597e69ec4eb1049fc8bd28291d9c93eb820e02dd2554931f689c03f99af018" gracePeriod=30 Oct 14 13:38:54.483859 master-1 kubenswrapper[4740]: I1014 13:38:54.483733 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-notifier" containerID="cri-o://cef992ea89ebf50dfda1bc86a49651108b422ad03faad05e23bbfc6a2cb69887" gracePeriod=30 Oct 14 13:38:54.483943 master-1 kubenswrapper[4740]: I1014 13:38:54.483836 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-evaluator" containerID="cri-o://0d8c04eeb24406133e1f3253a384b5a4b674f78fe6ddf5492016e2eb638c6b46" gracePeriod=30 Oct 14 13:38:54.484064 master-1 kubenswrapper[4740]: I1014 13:38:54.484040 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-api" containerID="cri-o://5bac30b0bf2098c76e92a87b1a38be9c96fdd5009184f6401cb841effecb9d35" gracePeriod=30 Oct 14 13:38:54.488555 master-1 kubenswrapper[4740]: I1014 13:38:54.488465 4740 generic.go:334] "Generic (PLEG): container finished" podID="f0f37760-d0d3-44d0-b4b2-88095f10222f" containerID="dabf36a612e5327ab76517bfa400b882cf8ae5162383fc394bfecd3a1bd05c57" exitCode=0 Oct 14 13:38:54.488701 master-1 kubenswrapper[4740]: I1014 13:38:54.488549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"f0f37760-d0d3-44d0-b4b2-88095f10222f","Type":"ContainerDied","Data":"dabf36a612e5327ab76517bfa400b882cf8ae5162383fc394bfecd3a1bd05c57"} Oct 14 13:38:54.522896 master-1 kubenswrapper[4740]: I1014 13:38:54.522817 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" 
podStartSLOduration=14.520588557 podStartE2EDuration="25.52280236s" podCreationTimestamp="2025-10-14 13:38:29 +0000 UTC" firstStartedPulling="2025-10-14 13:38:42.967508585 +0000 UTC m=+1948.777797914" lastFinishedPulling="2025-10-14 13:38:53.969722388 +0000 UTC m=+1959.780011717" observedRunningTime="2025-10-14 13:38:54.522664947 +0000 UTC m=+1960.332954286" watchObservedRunningTime="2025-10-14 13:38:54.52280236 +0000 UTC m=+1960.333091689" Oct 14 13:38:54.781473 master-1 kubenswrapper[4740]: I1014 13:38:54.781408 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:38:54.855366 master-1 kubenswrapper[4740]: I1014 13:38:54.855300 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j77qg\" (UniqueName: \"kubernetes.io/projected/f0f37760-d0d3-44d0-b4b2-88095f10222f-kube-api-access-j77qg\") pod \"f0f37760-d0d3-44d0-b4b2-88095f10222f\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " Oct 14 13:38:54.855602 master-1 kubenswrapper[4740]: I1014 13:38:54.855556 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-config-data\") pod \"f0f37760-d0d3-44d0-b4b2-88095f10222f\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " Oct 14 13:38:54.855661 master-1 kubenswrapper[4740]: I1014 13:38:54.855634 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-combined-ca-bundle\") pod \"f0f37760-d0d3-44d0-b4b2-88095f10222f\" (UID: \"f0f37760-d0d3-44d0-b4b2-88095f10222f\") " Oct 14 13:38:54.858902 master-1 kubenswrapper[4740]: I1014 13:38:54.858831 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f37760-d0d3-44d0-b4b2-88095f10222f-kube-api-access-j77qg" 
(OuterVolumeSpecName: "kube-api-access-j77qg") pod "f0f37760-d0d3-44d0-b4b2-88095f10222f" (UID: "f0f37760-d0d3-44d0-b4b2-88095f10222f"). InnerVolumeSpecName "kube-api-access-j77qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:38:54.897925 master-1 kubenswrapper[4740]: I1014 13:38:54.897854 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0f37760-d0d3-44d0-b4b2-88095f10222f" (UID: "f0f37760-d0d3-44d0-b4b2-88095f10222f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:38:54.934732 master-1 kubenswrapper[4740]: I1014 13:38:54.934633 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-config-data" (OuterVolumeSpecName: "config-data") pod "f0f37760-d0d3-44d0-b4b2-88095f10222f" (UID: "f0f37760-d0d3-44d0-b4b2-88095f10222f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:38:54.957783 master-1 kubenswrapper[4740]: I1014 13:38:54.957731 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j77qg\" (UniqueName: \"kubernetes.io/projected/f0f37760-d0d3-44d0-b4b2-88095f10222f-kube-api-access-j77qg\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:54.957783 master-1 kubenswrapper[4740]: I1014 13:38:54.957772 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:54.957783 master-1 kubenswrapper[4740]: I1014 13:38:54.957785 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f37760-d0d3-44d0-b4b2-88095f10222f-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:38:55.504480 master-1 kubenswrapper[4740]: I1014 13:38:55.504310 4740 generic.go:334] "Generic (PLEG): container finished" podID="fff845e7-62de-421e-80e6-e85408dc48be" containerID="cef992ea89ebf50dfda1bc86a49651108b422ad03faad05e23bbfc6a2cb69887" exitCode=0 Oct 14 13:38:55.504480 master-1 kubenswrapper[4740]: I1014 13:38:55.504381 4740 generic.go:334] "Generic (PLEG): container finished" podID="fff845e7-62de-421e-80e6-e85408dc48be" containerID="0d8c04eeb24406133e1f3253a384b5a4b674f78fe6ddf5492016e2eb638c6b46" exitCode=0 Oct 14 13:38:55.504480 master-1 kubenswrapper[4740]: I1014 13:38:55.504391 4740 generic.go:334] "Generic (PLEG): container finished" podID="fff845e7-62de-421e-80e6-e85408dc48be" containerID="5bac30b0bf2098c76e92a87b1a38be9c96fdd5009184f6401cb841effecb9d35" exitCode=0 Oct 14 13:38:55.504480 master-1 kubenswrapper[4740]: I1014 13:38:55.504436 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerDied","Data":"cef992ea89ebf50dfda1bc86a49651108b422ad03faad05e23bbfc6a2cb69887"} Oct 14 13:38:55.504480 master-1 kubenswrapper[4740]: I1014 13:38:55.504492 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerDied","Data":"0d8c04eeb24406133e1f3253a384b5a4b674f78fe6ddf5492016e2eb638c6b46"} Oct 14 13:38:55.505466 master-1 kubenswrapper[4740]: I1014 13:38:55.504504 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerDied","Data":"5bac30b0bf2098c76e92a87b1a38be9c96fdd5009184f6401cb841effecb9d35"} Oct 14 13:38:55.508096 master-1 kubenswrapper[4740]: I1014 13:38:55.508051 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"f0f37760-d0d3-44d0-b4b2-88095f10222f","Type":"ContainerDied","Data":"ba2444aaae52108b11402b60ca38d4d8d6c7173849af54861240ae200855a9e0"} Oct 14 13:38:55.508271 master-1 kubenswrapper[4740]: I1014 13:38:55.508118 4740 scope.go:117] "RemoveContainer" containerID="dabf36a612e5327ab76517bfa400b882cf8ae5162383fc394bfecd3a1bd05c57" Oct 14 13:38:55.508435 master-1 kubenswrapper[4740]: I1014 13:38:55.508375 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:38:55.573141 master-1 kubenswrapper[4740]: I1014 13:38:55.573035 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:55.589104 master-1 kubenswrapper[4740]: I1014 13:38:55.589039 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:55.602197 master-1 kubenswrapper[4740]: I1014 13:38:55.602122 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:55.602721 master-1 kubenswrapper[4740]: E1014 13:38:55.602690 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f37760-d0d3-44d0-b4b2-88095f10222f" containerName="nova-scheduler-scheduler" Oct 14 13:38:55.602721 master-1 kubenswrapper[4740]: I1014 13:38:55.602716 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f37760-d0d3-44d0-b4b2-88095f10222f" containerName="nova-scheduler-scheduler" Oct 14 13:38:55.602828 master-1 kubenswrapper[4740]: E1014 13:38:55.602734 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerName="dnsmasq-dns" Oct 14 13:38:55.602828 master-1 kubenswrapper[4740]: I1014 13:38:55.602743 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerName="dnsmasq-dns" Oct 14 13:38:55.602828 master-1 kubenswrapper[4740]: E1014 13:38:55.602772 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerName="init" Oct 14 13:38:55.602828 master-1 kubenswrapper[4740]: I1014 13:38:55.602782 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerName="init" Oct 14 13:38:55.603096 master-1 kubenswrapper[4740]: I1014 13:38:55.603062 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f37760-d0d3-44d0-b4b2-88095f10222f" 
containerName="nova-scheduler-scheduler" Oct 14 13:38:55.603154 master-1 kubenswrapper[4740]: I1014 13:38:55.603106 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40f97f4-5012-4f9c-bb3b-5bb53d3544be" containerName="dnsmasq-dns" Oct 14 13:38:55.604279 master-1 kubenswrapper[4740]: I1014 13:38:55.604221 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:38:55.610093 master-1 kubenswrapper[4740]: I1014 13:38:55.610045 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 13:38:55.611472 master-1 kubenswrapper[4740]: I1014 13:38:55.611436 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:55.679054 master-1 kubenswrapper[4740]: I1014 13:38:55.678978 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-config-data\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.679286 master-1 kubenswrapper[4740]: I1014 13:38:55.679071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.679537 master-1 kubenswrapper[4740]: I1014 13:38:55.679458 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2zk\" (UniqueName: \"kubernetes.io/projected/4ba8d44d-71cf-454e-84fc-50f5c917f079-kube-api-access-qq2zk\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 
13:38:55.782289 master-1 kubenswrapper[4740]: I1014 13:38:55.782058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq2zk\" (UniqueName: \"kubernetes.io/projected/4ba8d44d-71cf-454e-84fc-50f5c917f079-kube-api-access-qq2zk\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.782289 master-1 kubenswrapper[4740]: I1014 13:38:55.782182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-config-data\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.782671 master-1 kubenswrapper[4740]: I1014 13:38:55.782324 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.787981 master-1 kubenswrapper[4740]: I1014 13:38:55.787922 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.788725 master-1 kubenswrapper[4740]: I1014 13:38:55.788449 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-config-data\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.796283 master-1 kubenswrapper[4740]: I1014 13:38:55.796189 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-2" Oct 14 13:38:55.796494 master-1 kubenswrapper[4740]: I1014 13:38:55.796311 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 14 13:38:55.810269 master-1 kubenswrapper[4740]: I1014 13:38:55.810153 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq2zk\" (UniqueName: \"kubernetes.io/projected/4ba8d44d-71cf-454e-84fc-50f5c917f079-kube-api-access-qq2zk\") pod \"nova-scheduler-1\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " pod="openstack/nova-scheduler-1" Oct 14 13:38:55.940609 master-1 kubenswrapper[4740]: I1014 13:38:55.940505 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:38:56.409756 master-1 kubenswrapper[4740]: I1014 13:38:56.409682 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:38:56.415365 master-1 kubenswrapper[4740]: W1014 13:38:56.414720 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ba8d44d_71cf_454e_84fc_50f5c917f079.slice/crio-5cacab3b1e65ce4b412affd246b28a675956ee2d06ecba672429ac8f2e964de9 WatchSource:0}: Error finding container 5cacab3b1e65ce4b412affd246b28a675956ee2d06ecba672429ac8f2e964de9: Status 404 returned error can't find the container with id 5cacab3b1e65ce4b412affd246b28a675956ee2d06ecba672429ac8f2e964de9 Oct 14 13:38:56.517970 master-1 kubenswrapper[4740]: I1014 13:38:56.517879 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4ba8d44d-71cf-454e-84fc-50f5c917f079","Type":"ContainerStarted","Data":"5cacab3b1e65ce4b412affd246b28a675956ee2d06ecba672429ac8f2e964de9"} Oct 14 13:38:56.839811 master-1 kubenswrapper[4740]: I1014 13:38:56.839733 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" 
podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:38:56.880910 master-1 kubenswrapper[4740]: I1014 13:38:56.880822 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:38:56.944065 master-1 kubenswrapper[4740]: I1014 13:38:56.944000 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9" Oct 14 13:38:56.944355 master-1 kubenswrapper[4740]: E1014 13:38:56.944318 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:38:56.956882 master-1 kubenswrapper[4740]: I1014 13:38:56.956818 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f37760-d0d3-44d0-b4b2-88095f10222f" path="/var/lib/kubelet/pods/f0f37760-d0d3-44d0-b4b2-88095f10222f/volumes" Oct 14 13:38:57.533908 master-1 kubenswrapper[4740]: I1014 13:38:57.533839 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4ba8d44d-71cf-454e-84fc-50f5c917f079","Type":"ContainerStarted","Data":"33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe"} Oct 14 13:38:57.569748 master-1 kubenswrapper[4740]: I1014 13:38:57.569659 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-1" 
podStartSLOduration=2.569635477 podStartE2EDuration="2.569635477s" podCreationTimestamp="2025-10-14 13:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:38:57.563788612 +0000 UTC m=+1963.374077961" watchObservedRunningTime="2025-10-14 13:38:57.569635477 +0000 UTC m=+1963.379924806" Oct 14 13:38:58.500095 master-1 kubenswrapper[4740]: I1014 13:38:58.500042 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:38:58.500701 master-1 kubenswrapper[4740]: I1014 13:38:58.500663 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-log" containerID="cri-o://bb6216313388627f07cc5f9d7f3fc804b44df3998a68a98115ab7d89403eecc4" gracePeriod=30 Oct 14 13:38:58.500839 master-1 kubenswrapper[4740]: I1014 13:38:58.500771 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-metadata" containerID="cri-o://8dd96197bc75e254b98fcd8d332a2bca0a60437b93e392c3892305ff01c6c560" gracePeriod=30 Oct 14 13:38:59.555956 master-1 kubenswrapper[4740]: I1014 13:38:59.555894 4740 generic.go:334] "Generic (PLEG): container finished" podID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerID="bb6216313388627f07cc5f9d7f3fc804b44df3998a68a98115ab7d89403eecc4" exitCode=143 Oct 14 13:38:59.555956 master-1 kubenswrapper[4740]: I1014 13:38:59.555960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1d0c6dc3-247f-42bf-bd48-265621b2c202","Type":"ContainerDied","Data":"bb6216313388627f07cc5f9d7f3fc804b44df3998a68a98115ab7d89403eecc4"} Oct 14 13:39:00.941401 master-1 kubenswrapper[4740]: I1014 13:39:00.941308 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-scheduler-1" Oct 14 13:39:02.586081 master-1 kubenswrapper[4740]: I1014 13:39:02.585979 4740 generic.go:334] "Generic (PLEG): container finished" podID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerID="8dd96197bc75e254b98fcd8d332a2bca0a60437b93e392c3892305ff01c6c560" exitCode=0 Oct 14 13:39:02.587166 master-1 kubenswrapper[4740]: I1014 13:39:02.586078 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1d0c6dc3-247f-42bf-bd48-265621b2c202","Type":"ContainerDied","Data":"8dd96197bc75e254b98fcd8d332a2bca0a60437b93e392c3892305ff01c6c560"} Oct 14 13:39:02.587166 master-1 kubenswrapper[4740]: I1014 13:39:02.586158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"1d0c6dc3-247f-42bf-bd48-265621b2c202","Type":"ContainerDied","Data":"81c5921b264158e81e9e7eef723673ac6522b3b33877cb89230200e8788b1960"} Oct 14 13:39:02.587166 master-1 kubenswrapper[4740]: I1014 13:39:02.586175 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81c5921b264158e81e9e7eef723673ac6522b3b33877cb89230200e8788b1960" Oct 14 13:39:02.616447 master-1 kubenswrapper[4740]: I1014 13:39:02.616358 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:39:02.651290 master-1 kubenswrapper[4740]: I1014 13:39:02.651168 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-config-data\") pod \"1d0c6dc3-247f-42bf-bd48-265621b2c202\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " Oct 14 13:39:02.651776 master-1 kubenswrapper[4740]: I1014 13:39:02.651719 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r88j5\" (UniqueName: \"kubernetes.io/projected/1d0c6dc3-247f-42bf-bd48-265621b2c202-kube-api-access-r88j5\") pod \"1d0c6dc3-247f-42bf-bd48-265621b2c202\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " Oct 14 13:39:02.651889 master-1 kubenswrapper[4740]: I1014 13:39:02.651851 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-combined-ca-bundle\") pod \"1d0c6dc3-247f-42bf-bd48-265621b2c202\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " Oct 14 13:39:02.651965 master-1 kubenswrapper[4740]: I1014 13:39:02.651931 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0c6dc3-247f-42bf-bd48-265621b2c202-logs\") pod \"1d0c6dc3-247f-42bf-bd48-265621b2c202\" (UID: \"1d0c6dc3-247f-42bf-bd48-265621b2c202\") " Oct 14 13:39:02.652519 master-1 kubenswrapper[4740]: I1014 13:39:02.652454 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d0c6dc3-247f-42bf-bd48-265621b2c202-logs" (OuterVolumeSpecName: "logs") pod "1d0c6dc3-247f-42bf-bd48-265621b2c202" (UID: "1d0c6dc3-247f-42bf-bd48-265621b2c202"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:39:02.652961 master-1 kubenswrapper[4740]: I1014 13:39:02.652903 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d0c6dc3-247f-42bf-bd48-265621b2c202-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:02.655798 master-1 kubenswrapper[4740]: I1014 13:39:02.655709 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d0c6dc3-247f-42bf-bd48-265621b2c202-kube-api-access-r88j5" (OuterVolumeSpecName: "kube-api-access-r88j5") pod "1d0c6dc3-247f-42bf-bd48-265621b2c202" (UID: "1d0c6dc3-247f-42bf-bd48-265621b2c202"). InnerVolumeSpecName "kube-api-access-r88j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:39:02.674110 master-1 kubenswrapper[4740]: I1014 13:39:02.674022 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-config-data" (OuterVolumeSpecName: "config-data") pod "1d0c6dc3-247f-42bf-bd48-265621b2c202" (UID: "1d0c6dc3-247f-42bf-bd48-265621b2c202"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:02.674462 master-1 kubenswrapper[4740]: I1014 13:39:02.674393 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d0c6dc3-247f-42bf-bd48-265621b2c202" (UID: "1d0c6dc3-247f-42bf-bd48-265621b2c202"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:02.755267 master-1 kubenswrapper[4740]: I1014 13:39:02.755032 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r88j5\" (UniqueName: \"kubernetes.io/projected/1d0c6dc3-247f-42bf-bd48-265621b2c202-kube-api-access-r88j5\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:02.755267 master-1 kubenswrapper[4740]: I1014 13:39:02.755085 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:02.755267 master-1 kubenswrapper[4740]: I1014 13:39:02.755102 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d0c6dc3-247f-42bf-bd48-265621b2c202-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:03.598247 master-1 kubenswrapper[4740]: I1014 13:39:03.598148 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:39:03.638839 master-1 kubenswrapper[4740]: I1014 13:39:03.638762 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:39:03.651094 master-1 kubenswrapper[4740]: I1014 13:39:03.649832 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:39:03.717049 master-1 kubenswrapper[4740]: I1014 13:39:03.716982 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:39:03.718038 master-1 kubenswrapper[4740]: E1014 13:39:03.718009 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-log" Oct 14 13:39:03.718188 master-1 kubenswrapper[4740]: I1014 13:39:03.718169 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-log" Oct 14 13:39:03.718343 master-1 kubenswrapper[4740]: E1014 13:39:03.718322 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-metadata" Oct 14 13:39:03.718471 master-1 kubenswrapper[4740]: I1014 13:39:03.718450 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-metadata" Oct 14 13:39:03.718888 master-1 kubenswrapper[4740]: I1014 13:39:03.718862 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-log" Oct 14 13:39:03.719026 master-1 kubenswrapper[4740]: I1014 13:39:03.719006 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" containerName="nova-metadata-metadata" Oct 14 13:39:03.720449 master-1 kubenswrapper[4740]: I1014 13:39:03.720424 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:39:03.724909 master-1 kubenswrapper[4740]: I1014 13:39:03.723918 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 13:39:03.724909 master-1 kubenswrapper[4740]: I1014 13:39:03.724414 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 13:39:03.734257 master-1 kubenswrapper[4740]: I1014 13:39:03.734123 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:39:03.786539 master-1 kubenswrapper[4740]: I1014 13:39:03.786471 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgxhq\" (UniqueName: \"kubernetes.io/projected/034d010b-5277-4cbe-b908-94fef09db25d-kube-api-access-qgxhq\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.787051 master-1 kubenswrapper[4740]: I1014 13:39:03.787010 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.787330 master-1 kubenswrapper[4740]: I1014 13:39:03.787309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-config-data\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.787497 master-1 kubenswrapper[4740]: I1014 13:39:03.787482 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/034d010b-5277-4cbe-b908-94fef09db25d-logs\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.787705 master-1 kubenswrapper[4740]: I1014 13:39:03.787686 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.889209 master-1 kubenswrapper[4740]: I1014 13:39:03.889047 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgxhq\" (UniqueName: \"kubernetes.io/projected/034d010b-5277-4cbe-b908-94fef09db25d-kube-api-access-qgxhq\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.889209 master-1 kubenswrapper[4740]: I1014 13:39:03.889097 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.889209 master-1 kubenswrapper[4740]: I1014 13:39:03.889164 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-config-data\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.889546 master-1 kubenswrapper[4740]: I1014 13:39:03.889220 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034d010b-5277-4cbe-b908-94fef09db25d-logs\") pod \"nova-metadata-1\" (UID: 
\"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.889546 master-1 kubenswrapper[4740]: I1014 13:39:03.889323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.890541 master-1 kubenswrapper[4740]: I1014 13:39:03.890465 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034d010b-5277-4cbe-b908-94fef09db25d-logs\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.893946 master-1 kubenswrapper[4740]: I1014 13:39:03.893911 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.894047 master-1 kubenswrapper[4740]: I1014 13:39:03.893995 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.894829 master-1 kubenswrapper[4740]: I1014 13:39:03.894773 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-config-data\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:03.922641 master-1 kubenswrapper[4740]: I1014 13:39:03.922610 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgxhq\" (UniqueName: \"kubernetes.io/projected/034d010b-5277-4cbe-b908-94fef09db25d-kube-api-access-qgxhq\") pod \"nova-metadata-1\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " pod="openstack/nova-metadata-1" Oct 14 13:39:04.054448 master-1 kubenswrapper[4740]: I1014 13:39:04.054369 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:39:04.564998 master-1 kubenswrapper[4740]: I1014 13:39:04.564924 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:39:04.609141 master-1 kubenswrapper[4740]: I1014 13:39:04.608454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"034d010b-5277-4cbe-b908-94fef09db25d","Type":"ContainerStarted","Data":"7b1f3b7ebf468e68da3fefa3cbd625f578a3852318493dfc9bbfbcbae6780bf1"} Oct 14 13:39:04.960070 master-1 kubenswrapper[4740]: I1014 13:39:04.959996 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d0c6dc3-247f-42bf-bd48-265621b2c202" path="/var/lib/kubelet/pods/1d0c6dc3-247f-42bf-bd48-265621b2c202/volumes" Oct 14 13:39:05.621524 master-1 kubenswrapper[4740]: I1014 13:39:05.621450 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"034d010b-5277-4cbe-b908-94fef09db25d","Type":"ContainerStarted","Data":"2955006b356315d3247efbd601e1d531451e33a4defb0d38baa1bd4af2a10d6a"} Oct 14 13:39:05.621524 master-1 kubenswrapper[4740]: I1014 13:39:05.621506 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"034d010b-5277-4cbe-b908-94fef09db25d","Type":"ContainerStarted","Data":"9ea58968249d52450e1ea1f1a4cdbcf459bdfa17a8c2c12c971de75a7ca16b7e"} Oct 14 13:39:05.648581 master-1 kubenswrapper[4740]: I1014 13:39:05.648502 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-1" podStartSLOduration=2.648484163 podStartE2EDuration="2.648484163s" podCreationTimestamp="2025-10-14 13:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:39:05.64645613 +0000 UTC m=+1971.456745459" watchObservedRunningTime="2025-10-14 13:39:05.648484163 +0000 UTC m=+1971.458773492" Oct 14 13:39:05.799943 master-1 kubenswrapper[4740]: I1014 13:39:05.799842 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 14 13:39:05.800374 master-1 kubenswrapper[4740]: I1014 13:39:05.800071 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 14 13:39:05.800706 master-1 kubenswrapper[4740]: I1014 13:39:05.800639 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 14 13:39:05.800790 master-1 kubenswrapper[4740]: I1014 13:39:05.800724 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 14 13:39:05.803007 master-1 kubenswrapper[4740]: I1014 13:39:05.802948 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 14 13:39:05.803540 master-1 kubenswrapper[4740]: I1014 13:39:05.803513 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 14 13:39:05.941387 master-1 kubenswrapper[4740]: I1014 13:39:05.941299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-1" Oct 14 13:39:05.988631 master-1 kubenswrapper[4740]: I1014 13:39:05.988006 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-1" Oct 14 13:39:06.658396 master-1 kubenswrapper[4740]: I1014 13:39:06.658331 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-1" Oct 14 13:39:09.055264 master-1 kubenswrapper[4740]: I1014 13:39:09.055137 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 14 13:39:09.056455 master-1 kubenswrapper[4740]: I1014 13:39:09.055315 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1" Oct 14 13:39:11.944692 master-1 kubenswrapper[4740]: I1014 13:39:11.944645 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9" Oct 14 13:39:11.945275 master-1 kubenswrapper[4740]: E1014 13:39:11.945054 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:39:12.059309 master-1 kubenswrapper[4740]: I1014 13:39:12.059241 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:12.059630 master-1 kubenswrapper[4740]: I1014 13:39:12.059524 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-log" containerID="cri-o://bc2f610a777baef297a925b65687d8569ac1963fffb5986993ce2fdd4d44bc07" gracePeriod=30 Oct 14 13:39:12.060100 master-1 kubenswrapper[4740]: I1014 13:39:12.060058 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-api" containerID="cri-o://7bb2883e4220cfe413d82f380b03780663e46f55d51c617e8eaf7143cfd7e258" gracePeriod=30 Oct 14 13:39:12.721195 master-1 kubenswrapper[4740]: I1014 13:39:12.721080 4740 generic.go:334] "Generic (PLEG): 
container finished" podID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerID="bc2f610a777baef297a925b65687d8569ac1963fffb5986993ce2fdd4d44bc07" exitCode=143 Oct 14 13:39:12.721195 master-1 kubenswrapper[4740]: I1014 13:39:12.721129 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a","Type":"ContainerDied","Data":"bc2f610a777baef297a925b65687d8569ac1963fffb5986993ce2fdd4d44bc07"} Oct 14 13:39:13.735605 master-1 kubenswrapper[4740]: I1014 13:39:13.735489 4740 generic.go:334] "Generic (PLEG): container finished" podID="e497759a-6e7f-423b-b8f7-9f52606d2ec3" containerID="aa4018bca6359ae54f263cbc8c8b2561130f68dd4d8f59a779befef271e6d2bc" exitCode=137 Oct 14 13:39:13.735605 master-1 kubenswrapper[4740]: I1014 13:39:13.735577 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e497759a-6e7f-423b-b8f7-9f52606d2ec3","Type":"ContainerDied","Data":"aa4018bca6359ae54f263cbc8c8b2561130f68dd4d8f59a779befef271e6d2bc"} Oct 14 13:39:14.054710 master-1 kubenswrapper[4740]: I1014 13:39:14.054545 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 14 13:39:14.055473 master-1 kubenswrapper[4740]: I1014 13:39:14.055421 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1" Oct 14 13:39:14.148854 master-1 kubenswrapper[4740]: I1014 13:39:14.148772 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.226960 master-1 kubenswrapper[4740]: I1014 13:39:14.226882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-combined-ca-bundle\") pod \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " Oct 14 13:39:14.227211 master-1 kubenswrapper[4740]: I1014 13:39:14.227041 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69b9n\" (UniqueName: \"kubernetes.io/projected/e497759a-6e7f-423b-b8f7-9f52606d2ec3-kube-api-access-69b9n\") pod \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " Oct 14 13:39:14.227211 master-1 kubenswrapper[4740]: I1014 13:39:14.227158 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-config-data\") pod \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\" (UID: \"e497759a-6e7f-423b-b8f7-9f52606d2ec3\") " Oct 14 13:39:14.232708 master-1 kubenswrapper[4740]: I1014 13:39:14.232654 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e497759a-6e7f-423b-b8f7-9f52606d2ec3-kube-api-access-69b9n" (OuterVolumeSpecName: "kube-api-access-69b9n") pod "e497759a-6e7f-423b-b8f7-9f52606d2ec3" (UID: "e497759a-6e7f-423b-b8f7-9f52606d2ec3"). InnerVolumeSpecName "kube-api-access-69b9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:39:14.255657 master-1 kubenswrapper[4740]: I1014 13:39:14.255590 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-config-data" (OuterVolumeSpecName: "config-data") pod "e497759a-6e7f-423b-b8f7-9f52606d2ec3" (UID: "e497759a-6e7f-423b-b8f7-9f52606d2ec3"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:14.298449 master-1 kubenswrapper[4740]: I1014 13:39:14.298380 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e497759a-6e7f-423b-b8f7-9f52606d2ec3" (UID: "e497759a-6e7f-423b-b8f7-9f52606d2ec3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:14.330950 master-1 kubenswrapper[4740]: I1014 13:39:14.330802 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:14.330950 master-1 kubenswrapper[4740]: I1014 13:39:14.330887 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e497759a-6e7f-423b-b8f7-9f52606d2ec3-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:14.330950 master-1 kubenswrapper[4740]: I1014 13:39:14.330903 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69b9n\" (UniqueName: \"kubernetes.io/projected/e497759a-6e7f-423b-b8f7-9f52606d2ec3-kube-api-access-69b9n\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:14.747763 master-1 kubenswrapper[4740]: I1014 13:39:14.747685 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e497759a-6e7f-423b-b8f7-9f52606d2ec3","Type":"ContainerDied","Data":"f979477cc889452c27f8fe562bf8cad5a5968eb5b435d290c9c7b9b07411c45d"} Oct 14 13:39:14.747763 master-1 kubenswrapper[4740]: I1014 13:39:14.747737 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.752866 master-1 kubenswrapper[4740]: I1014 13:39:14.747774 4740 scope.go:117] "RemoveContainer" containerID="aa4018bca6359ae54f263cbc8c8b2561130f68dd4d8f59a779befef271e6d2bc" Oct 14 13:39:14.813700 master-1 kubenswrapper[4740]: I1014 13:39:14.813638 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:39:14.832160 master-1 kubenswrapper[4740]: I1014 13:39:14.832102 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:39:14.879629 master-1 kubenswrapper[4740]: I1014 13:39:14.879567 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:39:14.880182 master-1 kubenswrapper[4740]: E1014 13:39:14.880153 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e497759a-6e7f-423b-b8f7-9f52606d2ec3" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 13:39:14.880182 master-1 kubenswrapper[4740]: I1014 13:39:14.880180 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e497759a-6e7f-423b-b8f7-9f52606d2ec3" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 13:39:14.880777 master-1 kubenswrapper[4740]: I1014 13:39:14.880732 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e497759a-6e7f-423b-b8f7-9f52606d2ec3" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 13:39:14.882497 master-1 kubenswrapper[4740]: I1014 13:39:14.882419 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.885989 master-1 kubenswrapper[4740]: I1014 13:39:14.885594 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 14 13:39:14.886059 master-1 kubenswrapper[4740]: I1014 13:39:14.885594 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 14 13:39:14.887387 master-1 kubenswrapper[4740]: I1014 13:39:14.887342 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 13:39:14.907574 master-1 kubenswrapper[4740]: I1014 13:39:14.907290 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:39:14.949364 master-1 kubenswrapper[4740]: I1014 13:39:14.949313 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.949364 master-1 kubenswrapper[4740]: I1014 13:39:14.949375 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.949655 master-1 kubenswrapper[4740]: I1014 13:39:14.949417 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.949655 master-1 kubenswrapper[4740]: I1014 13:39:14.949566 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.949655 master-1 kubenswrapper[4740]: I1014 13:39:14.949584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6g7\" (UniqueName: \"kubernetes.io/projected/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-kube-api-access-cq6g7\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:14.977683 master-1 kubenswrapper[4740]: I1014 13:39:14.977629 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e497759a-6e7f-423b-b8f7-9f52606d2ec3" path="/var/lib/kubelet/pods/e497759a-6e7f-423b-b8f7-9f52606d2ec3/volumes" Oct 14 13:39:15.052472 master-1 kubenswrapper[4740]: I1014 13:39:15.052331 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.052472 master-1 kubenswrapper[4740]: I1014 13:39:15.052393 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6g7\" (UniqueName: \"kubernetes.io/projected/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-kube-api-access-cq6g7\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.052705 master-1 kubenswrapper[4740]: I1014 13:39:15.052487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.052705 master-1 kubenswrapper[4740]: I1014 13:39:15.052518 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.052705 master-1 kubenswrapper[4740]: I1014 13:39:15.052597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.056386 master-1 kubenswrapper[4740]: I1014 13:39:15.056211 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.059125 master-1 kubenswrapper[4740]: I1014 13:39:15.056935 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.059125 
master-1 kubenswrapper[4740]: I1014 13:39:15.057005 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:15.059125 master-1 kubenswrapper[4740]: I1014 13:39:15.057262 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.180:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:15.061391 master-1 kubenswrapper[4740]: I1014 13:39:15.060518 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.099600 master-1 kubenswrapper[4740]: I1014 13:39:15.082465 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.099600 master-1 kubenswrapper[4740]: I1014 13:39:15.093320 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq6g7\" (UniqueName: \"kubernetes.io/projected/976cec1b-1fcd-4401-9f66-91ced0e2fa2f-kube-api-access-cq6g7\") pod \"nova-cell1-novncproxy-0\" (UID: \"976cec1b-1fcd-4401-9f66-91ced0e2fa2f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.207502 master-1 kubenswrapper[4740]: I1014 
13:39:15.207438 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:39:15.759816 master-1 kubenswrapper[4740]: I1014 13:39:15.759751 4740 generic.go:334] "Generic (PLEG): container finished" podID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerID="7bb2883e4220cfe413d82f380b03780663e46f55d51c617e8eaf7143cfd7e258" exitCode=0 Oct 14 13:39:15.759816 master-1 kubenswrapper[4740]: I1014 13:39:15.759803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a","Type":"ContainerDied","Data":"7bb2883e4220cfe413d82f380b03780663e46f55d51c617e8eaf7143cfd7e258"} Oct 14 13:39:16.007606 master-1 kubenswrapper[4740]: I1014 13:39:16.007511 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:39:16.357701 master-1 kubenswrapper[4740]: I1014 13:39:16.357637 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:39:16.488093 master-1 kubenswrapper[4740]: I1014 13:39:16.488006 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-logs\") pod \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " Oct 14 13:39:16.488423 master-1 kubenswrapper[4740]: I1014 13:39:16.488207 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-combined-ca-bundle\") pod \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " Oct 14 13:39:16.488423 master-1 kubenswrapper[4740]: I1014 13:39:16.488336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-config-data\") pod \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " Oct 14 13:39:16.488423 master-1 kubenswrapper[4740]: I1014 13:39:16.488385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtmq\" (UniqueName: \"kubernetes.io/projected/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-kube-api-access-xvtmq\") pod \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\" (UID: \"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a\") " Oct 14 13:39:16.488680 master-1 kubenswrapper[4740]: I1014 13:39:16.488548 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-logs" (OuterVolumeSpecName: "logs") pod "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" (UID: "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:39:16.489195 master-1 kubenswrapper[4740]: I1014 13:39:16.489151 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:16.493922 master-1 kubenswrapper[4740]: I1014 13:39:16.493840 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-kube-api-access-xvtmq" (OuterVolumeSpecName: "kube-api-access-xvtmq") pod "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" (UID: "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a"). InnerVolumeSpecName "kube-api-access-xvtmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:39:16.516683 master-1 kubenswrapper[4740]: I1014 13:39:16.516596 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" (UID: "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:16.530259 master-1 kubenswrapper[4740]: I1014 13:39:16.530176 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-config-data" (OuterVolumeSpecName: "config-data") pod "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" (UID: "2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:16.590598 master-1 kubenswrapper[4740]: I1014 13:39:16.590456 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:16.590598 master-1 kubenswrapper[4740]: I1014 13:39:16.590499 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:16.590598 master-1 kubenswrapper[4740]: I1014 13:39:16.590510 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvtmq\" (UniqueName: \"kubernetes.io/projected/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a-kube-api-access-xvtmq\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:16.791315 master-1 kubenswrapper[4740]: I1014 13:39:16.790773 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"976cec1b-1fcd-4401-9f66-91ced0e2fa2f","Type":"ContainerStarted","Data":"46ef6a26d70ce625f67d40af664e0fd9a394c66397c8789fe6a8fe6841213208"} Oct 14 13:39:16.791315 master-1 kubenswrapper[4740]: I1014 13:39:16.791286 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"976cec1b-1fcd-4401-9f66-91ced0e2fa2f","Type":"ContainerStarted","Data":"bd4b2c0e1ff18d215a19c4d2386a3abf12913eaeeca142b0eed77b1e7ff233c8"} Oct 14 13:39:16.796502 master-1 kubenswrapper[4740]: I1014 13:39:16.796423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a","Type":"ContainerDied","Data":"ea8b56b14c43c75c14b8b7c85c0c575c9a23c67743c929f6babe38b2019cee55"} Oct 14 13:39:16.796502 master-1 kubenswrapper[4740]: I1014 13:39:16.796500 4740 scope.go:117] "RemoveContainer" 
containerID="7bb2883e4220cfe413d82f380b03780663e46f55d51c617e8eaf7143cfd7e258" Oct 14 13:39:16.796786 master-1 kubenswrapper[4740]: I1014 13:39:16.796680 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:39:16.818331 master-1 kubenswrapper[4740]: I1014 13:39:16.818225 4740 scope.go:117] "RemoveContainer" containerID="bc2f610a777baef297a925b65687d8569ac1963fffb5986993ce2fdd4d44bc07" Oct 14 13:39:16.965044 master-1 kubenswrapper[4740]: I1014 13:39:16.964870 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.964845018 podStartE2EDuration="2.964845018s" podCreationTimestamp="2025-10-14 13:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:39:16.963006668 +0000 UTC m=+1982.773296037" watchObservedRunningTime="2025-10-14 13:39:16.964845018 +0000 UTC m=+1982.775134347" Oct 14 13:39:17.031111 master-1 kubenswrapper[4740]: I1014 13:39:17.031030 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cd59f759-c7xdl"] Oct 14 13:39:17.035331 master-1 kubenswrapper[4740]: I1014 13:39:17.035157 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" podUID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerName="dnsmasq-dns" containerID="cri-o://9aa3890420154879771cf70d24c10f047f249594821c63f9104e47beefee1c06" gracePeriod=10 Oct 14 13:39:17.051413 master-1 kubenswrapper[4740]: I1014 13:39:17.051338 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-559c6f967c-vn2sd"] Oct 14 13:39:17.051739 master-1 kubenswrapper[4740]: E1014 13:39:17.051698 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-log" Oct 14 13:39:17.051739 master-1 
kubenswrapper[4740]: I1014 13:39:17.051719 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-log" Oct 14 13:39:17.051896 master-1 kubenswrapper[4740]: E1014 13:39:17.051747 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-api" Oct 14 13:39:17.051896 master-1 kubenswrapper[4740]: I1014 13:39:17.051754 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-api" Oct 14 13:39:17.052030 master-1 kubenswrapper[4740]: I1014 13:39:17.051903 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-api" Oct 14 13:39:17.052030 master-1 kubenswrapper[4740]: I1014 13:39:17.051922 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-log" Oct 14 13:39:17.053006 master-1 kubenswrapper[4740]: I1014 13:39:17.052969 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.148068 master-1 kubenswrapper[4740]: I1014 13:39:17.147959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559c6f967c-vn2sd"] Oct 14 13:39:17.203489 master-1 kubenswrapper[4740]: I1014 13:39:17.203387 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:17.204443 master-1 kubenswrapper[4740]: I1014 13:39:17.204369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-ovsdbserver-nb\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.204564 master-1 kubenswrapper[4740]: I1014 13:39:17.204508 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-dns-svc\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.204642 master-1 kubenswrapper[4740]: I1014 13:39:17.204584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-ovsdbserver-sb\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.204723 master-1 kubenswrapper[4740]: I1014 13:39:17.204660 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjw5\" (UniqueName: \"kubernetes.io/projected/7d99cb2a-687e-4772-a5a8-828439312af7-kube-api-access-qkjw5\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: 
\"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.204855 master-1 kubenswrapper[4740]: I1014 13:39:17.204813 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-config\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.204925 master-1 kubenswrapper[4740]: I1014 13:39:17.204880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-dns-swift-storage-0\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.309433 master-1 kubenswrapper[4740]: I1014 13:39:17.307073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-config\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.309943 master-1 kubenswrapper[4740]: I1014 13:39:17.309328 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:17.310011 master-1 kubenswrapper[4740]: I1014 13:39:17.309940 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-config\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.310088 master-1 kubenswrapper[4740]: I1014 13:39:17.309916 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-dns-swift-storage-0\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.310910 master-1 kubenswrapper[4740]: I1014 13:39:17.310886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-ovsdbserver-nb\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.311066 master-1 kubenswrapper[4740]: I1014 13:39:17.311044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-dns-svc\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.311265 master-1 kubenswrapper[4740]: I1014 13:39:17.311244 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-ovsdbserver-sb\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.311455 master-1 kubenswrapper[4740]: I1014 13:39:17.311437 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkjw5\" (UniqueName: \"kubernetes.io/projected/7d99cb2a-687e-4772-a5a8-828439312af7-kube-api-access-qkjw5\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.312085 master-1 kubenswrapper[4740]: I1014 13:39:17.312055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-ovsdbserver-nb\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.312168 master-1 kubenswrapper[4740]: I1014 13:39:17.311096 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-dns-swift-storage-0\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.315084 master-1 kubenswrapper[4740]: I1014 13:39:17.315058 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-dns-svc\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.315733 master-1 kubenswrapper[4740]: I1014 13:39:17.315683 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d99cb2a-687e-4772-a5a8-828439312af7-ovsdbserver-sb\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.482854 master-1 kubenswrapper[4740]: I1014 13:39:17.476957 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:17.482854 master-1 kubenswrapper[4740]: I1014 13:39:17.479524 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:39:17.485153 master-1 kubenswrapper[4740]: I1014 13:39:17.485083 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 13:39:17.485506 master-1 kubenswrapper[4740]: I1014 13:39:17.485470 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 13:39:17.485632 master-1 kubenswrapper[4740]: I1014 13:39:17.485569 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:39:17.596731 master-1 kubenswrapper[4740]: I1014 13:39:17.596672 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:17.599984 master-1 kubenswrapper[4740]: I1014 13:39:17.599914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkjw5\" (UniqueName: \"kubernetes.io/projected/7d99cb2a-687e-4772-a5a8-828439312af7-kube-api-access-qkjw5\") pod \"dnsmasq-dns-559c6f967c-vn2sd\" (UID: \"7d99cb2a-687e-4772-a5a8-828439312af7\") " pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:17.618985 master-1 kubenswrapper[4740]: I1014 13:39:17.618909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9hcp\" (UniqueName: \"kubernetes.io/projected/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-kube-api-access-p9hcp\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2" Oct 14 13:39:17.619211 master-1 kubenswrapper[4740]: I1014 13:39:17.619028 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-config-data\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2" Oct 14 13:39:17.619211 master-1 kubenswrapper[4740]: I1014 13:39:17.619067 
4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-logs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.619211 master-1 kubenswrapper[4740]: I1014 13:39:17.619109 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.619211 master-1 kubenswrapper[4740]: I1014 13:39:17.619154 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-internal-tls-certs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.619211 master-1 kubenswrapper[4740]: I1014 13:39:17.619180 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-public-tls-certs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.721061 master-1 kubenswrapper[4740]: I1014 13:39:17.720984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-public-tls-certs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.721310 master-1 kubenswrapper[4740]: I1014 13:39:17.721098 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9hcp\" (UniqueName: \"kubernetes.io/projected/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-kube-api-access-p9hcp\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.721310 master-1 kubenswrapper[4740]: I1014 13:39:17.721155 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-config-data\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.721310 master-1 kubenswrapper[4740]: I1014 13:39:17.721187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-logs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.721310 master-1 kubenswrapper[4740]: I1014 13:39:17.721214 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.721310 master-1 kubenswrapper[4740]: I1014 13:39:17.721269 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-internal-tls-certs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.722162 master-1 kubenswrapper[4740]: I1014 13:39:17.722100 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-logs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.724981 master-1 kubenswrapper[4740]: I1014 13:39:17.724945 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-internal-tls-certs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.725134 master-1 kubenswrapper[4740]: I1014 13:39:17.725090 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.729887 master-1 kubenswrapper[4740]: I1014 13:39:17.729850 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-public-tls-certs\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.736767 master-1 kubenswrapper[4740]: I1014 13:39:17.736734 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-config-data\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.813929 master-1 kubenswrapper[4740]: I1014 13:39:17.813877 4740 generic.go:334] "Generic (PLEG): container finished" podID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerID="9aa3890420154879771cf70d24c10f047f249594821c63f9104e47beefee1c06" exitCode=0
Oct 14 13:39:17.816943 master-1 kubenswrapper[4740]: I1014 13:39:17.814046 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" event={"ID":"c9e0f72f-8c88-4297-a690-dd519cb22ec5","Type":"ContainerDied","Data":"9aa3890420154879771cf70d24c10f047f249594821c63f9104e47beefee1c06"}
Oct 14 13:39:17.817047 master-1 kubenswrapper[4740]: I1014 13:39:17.816392 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9hcp\" (UniqueName: \"kubernetes.io/projected/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-kube-api-access-p9hcp\") pod \"nova-api-2\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " pod="openstack/nova-api-2"
Oct 14 13:39:17.830853 master-1 kubenswrapper[4740]: I1014 13:39:17.830702 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd"
Oct 14 13:39:17.989410 master-1 kubenswrapper[4740]: I1014 13:39:17.989207 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:39:18.114338 master-1 kubenswrapper[4740]: I1014 13:39:18.114256 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2"
Oct 14 13:39:18.131414 master-1 kubenswrapper[4740]: I1014 13:39:18.131335 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-sb\") pod \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") "
Oct 14 13:39:18.131636 master-1 kubenswrapper[4740]: I1014 13:39:18.131466 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kmj9\" (UniqueName: \"kubernetes.io/projected/c9e0f72f-8c88-4297-a690-dd519cb22ec5-kube-api-access-6kmj9\") pod \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") "
Oct 14 13:39:18.131636 master-1 kubenswrapper[4740]: I1014 13:39:18.131487 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-nb\") pod \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") "
Oct 14 13:39:18.131733 master-1 kubenswrapper[4740]: I1014 13:39:18.131636 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-svc\") pod \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") "
Oct 14 13:39:18.131733 master-1 kubenswrapper[4740]: I1014 13:39:18.131685 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-config\") pod \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") "
Oct 14 13:39:18.131733 master-1 kubenswrapper[4740]: I1014 13:39:18.131722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-swift-storage-0\") pod \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\" (UID: \"c9e0f72f-8c88-4297-a690-dd519cb22ec5\") "
Oct 14 13:39:18.143689 master-1 kubenswrapper[4740]: I1014 13:39:18.143627 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e0f72f-8c88-4297-a690-dd519cb22ec5-kube-api-access-6kmj9" (OuterVolumeSpecName: "kube-api-access-6kmj9") pod "c9e0f72f-8c88-4297-a690-dd519cb22ec5" (UID: "c9e0f72f-8c88-4297-a690-dd519cb22ec5"). InnerVolumeSpecName "kube-api-access-6kmj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:39:18.219389 master-1 kubenswrapper[4740]: I1014 13:39:18.206801 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c9e0f72f-8c88-4297-a690-dd519cb22ec5" (UID: "c9e0f72f-8c88-4297-a690-dd519cb22ec5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:39:18.219389 master-1 kubenswrapper[4740]: I1014 13:39:18.208036 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-config" (OuterVolumeSpecName: "config") pod "c9e0f72f-8c88-4297-a690-dd519cb22ec5" (UID: "c9e0f72f-8c88-4297-a690-dd519cb22ec5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:39:18.221390 master-1 kubenswrapper[4740]: I1014 13:39:18.221320 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9e0f72f-8c88-4297-a690-dd519cb22ec5" (UID: "c9e0f72f-8c88-4297-a690-dd519cb22ec5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:39:18.244698 master-1 kubenswrapper[4740]: I1014 13:39:18.243822 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-config\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:18.244698 master-1 kubenswrapper[4740]: I1014 13:39:18.243855 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-swift-storage-0\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:18.244698 master-1 kubenswrapper[4740]: I1014 13:39:18.243865 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kmj9\" (UniqueName: \"kubernetes.io/projected/c9e0f72f-8c88-4297-a690-dd519cb22ec5-kube-api-access-6kmj9\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:18.244698 master-1 kubenswrapper[4740]: I1014 13:39:18.243876 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-nb\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:18.247757 master-1 kubenswrapper[4740]: I1014 13:39:18.247712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9e0f72f-8c88-4297-a690-dd519cb22ec5" (UID: "c9e0f72f-8c88-4297-a690-dd519cb22ec5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:39:18.247864 master-1 kubenswrapper[4740]: I1014 13:39:18.247717 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9e0f72f-8c88-4297-a690-dd519cb22ec5" (UID: "c9e0f72f-8c88-4297-a690-dd519cb22ec5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:39:18.328586 master-1 kubenswrapper[4740]: I1014 13:39:18.325785 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559c6f967c-vn2sd"]
Oct 14 13:39:18.330157 master-1 kubenswrapper[4740]: W1014 13:39:18.329637 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d99cb2a_687e_4772_a5a8_828439312af7.slice/crio-45802073dd1a0858ed3bff6ffbfdf7fc5834cfe4013c277c6f08cac480333b93 WatchSource:0}: Error finding container 45802073dd1a0858ed3bff6ffbfdf7fc5834cfe4013c277c6f08cac480333b93: Status 404 returned error can't find the container with id 45802073dd1a0858ed3bff6ffbfdf7fc5834cfe4013c277c6f08cac480333b93
Oct 14 13:39:18.345807 master-1 kubenswrapper[4740]: I1014 13:39:18.345766 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-ovsdbserver-sb\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:18.345807 master-1 kubenswrapper[4740]: I1014 13:39:18.345804 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9e0f72f-8c88-4297-a690-dd519cb22ec5-dns-svc\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:18.585522 master-1 kubenswrapper[4740]: I1014 13:39:18.585470 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"]
Oct 14 13:39:18.832737 master-1 kubenswrapper[4740]: I1014 13:39:18.832687 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"0e1ed27e-52e0-4e0c-b5e0-7175f483e357","Type":"ContainerStarted","Data":"dafe6baecdbb0069acb44c5aa6444b8034f4aabde4a93f6db8242f3f237e8ab1"}
Oct 14 13:39:18.832737 master-1 kubenswrapper[4740]: I1014 13:39:18.832737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"0e1ed27e-52e0-4e0c-b5e0-7175f483e357","Type":"ContainerStarted","Data":"9907f06576e79f163e9e402b8b62801fc2209887ed84522588ef477db508522c"}
Oct 14 13:39:18.835131 master-1 kubenswrapper[4740]: I1014 13:39:18.835107 4740 generic.go:334] "Generic (PLEG): container finished" podID="7d99cb2a-687e-4772-a5a8-828439312af7" containerID="ae4d9b0686f3b29170c93a03a901fdd748d9a005e8149429df87e93e3666845b" exitCode=0
Oct 14 13:39:18.835194 master-1 kubenswrapper[4740]: I1014 13:39:18.835158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" event={"ID":"7d99cb2a-687e-4772-a5a8-828439312af7","Type":"ContainerDied","Data":"ae4d9b0686f3b29170c93a03a901fdd748d9a005e8149429df87e93e3666845b"}
Oct 14 13:39:18.835194 master-1 kubenswrapper[4740]: I1014 13:39:18.835173 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" event={"ID":"7d99cb2a-687e-4772-a5a8-828439312af7","Type":"ContainerStarted","Data":"45802073dd1a0858ed3bff6ffbfdf7fc5834cfe4013c277c6f08cac480333b93"}
Oct 14 13:39:18.838179 master-1 kubenswrapper[4740]: I1014 13:39:18.838147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl" event={"ID":"c9e0f72f-8c88-4297-a690-dd519cb22ec5","Type":"ContainerDied","Data":"288dc818a52e34d75efc12a8c987dc4405e8e0836ba95f6e0b8b9250ca47d3f4"}
Oct 14 13:39:18.838268 master-1 kubenswrapper[4740]: I1014 13:39:18.838203 4740 scope.go:117] "RemoveContainer" containerID="9aa3890420154879771cf70d24c10f047f249594821c63f9104e47beefee1c06"
Oct 14 13:39:18.838326 master-1 kubenswrapper[4740]: I1014 13:39:18.838285 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd59f759-c7xdl"
Oct 14 13:39:18.877968 master-1 kubenswrapper[4740]: I1014 13:39:18.877916 4740 scope.go:117] "RemoveContainer" containerID="461e0146f1ea40b9a3f5f4aef2fea3cfa251134721cf2c31a7102aa2b4eafb4a"
Oct 14 13:39:18.916693 master-1 kubenswrapper[4740]: I1014 13:39:18.916640 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cd59f759-c7xdl"]
Oct 14 13:39:18.933833 master-1 kubenswrapper[4740]: I1014 13:39:18.933748 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cd59f759-c7xdl"]
Oct 14 13:39:18.956736 master-1 kubenswrapper[4740]: I1014 13:39:18.956686 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" path="/var/lib/kubelet/pods/2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a/volumes"
Oct 14 13:39:18.960334 master-1 kubenswrapper[4740]: I1014 13:39:18.958314 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" path="/var/lib/kubelet/pods/c9e0f72f-8c88-4297-a690-dd519cb22ec5/volumes"
Oct 14 13:39:19.849255 master-1 kubenswrapper[4740]: I1014 13:39:19.849174 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" event={"ID":"7d99cb2a-687e-4772-a5a8-828439312af7","Type":"ContainerStarted","Data":"4006f49cf461836007e42cc51db1a1abc861b2a4fae8c1815b75d4ffcbfedb95"}
Oct 14 13:39:19.850129 master-1 kubenswrapper[4740]: I1014 13:39:19.849363 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd"
Oct 14 13:39:19.869779 master-1 kubenswrapper[4740]: I1014 13:39:19.869701 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"0e1ed27e-52e0-4e0c-b5e0-7175f483e357","Type":"ContainerStarted","Data":"c9c63ed106e6d9d59aac3dd870fcb8fa67c4219f86e433f7ff70e5e7a0b54645"}
Oct 14 13:39:20.007687 master-1 kubenswrapper[4740]: I1014 13:39:20.007559 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" podStartSLOduration=4.007530715 podStartE2EDuration="4.007530715s" podCreationTimestamp="2025-10-14 13:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:39:19.9919021 +0000 UTC m=+1985.802191449" watchObservedRunningTime="2025-10-14 13:39:20.007530715 +0000 UTC m=+1985.817820064"
Oct 14 13:39:20.040743 master-1 kubenswrapper[4740]: I1014 13:39:20.039180 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=3.039158322 podStartE2EDuration="3.039158322s" podCreationTimestamp="2025-10-14 13:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:39:20.020265722 +0000 UTC m=+1985.830555071" watchObservedRunningTime="2025-10-14 13:39:20.039158322 +0000 UTC m=+1985.849447651"
Oct 14 13:39:20.208541 master-1 kubenswrapper[4740]: I1014 13:39:20.208391 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 13:39:24.061583 master-1 kubenswrapper[4740]: I1014 13:39:24.061518 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1"
Oct 14 13:39:24.063963 master-1 kubenswrapper[4740]: I1014 13:39:24.063930 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1"
Oct 14 13:39:24.066315 master-1 kubenswrapper[4740]: I1014 13:39:24.066278 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1"
Oct 14 13:39:24.936182 master-1 kubenswrapper[4740]: I1014 13:39:24.930523 4740 generic.go:334] "Generic (PLEG): container finished" podID="fff845e7-62de-421e-80e6-e85408dc48be" containerID="b9597e69ec4eb1049fc8bd28291d9c93eb820e02dd2554931f689c03f99af018" exitCode=137
Oct 14 13:39:24.936182 master-1 kubenswrapper[4740]: I1014 13:39:24.932011 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerDied","Data":"b9597e69ec4eb1049fc8bd28291d9c93eb820e02dd2554931f689c03f99af018"}
Oct 14 13:39:24.939790 master-1 kubenswrapper[4740]: I1014 13:39:24.938698 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1"
Oct 14 13:39:25.208713 master-1 kubenswrapper[4740]: I1014 13:39:25.208593 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 13:39:25.226800 master-1 kubenswrapper[4740]: I1014 13:39:25.226661 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 13:39:25.364276 master-1 kubenswrapper[4740]: I1014 13:39:25.364213 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 14 13:39:25.509982 master-1 kubenswrapper[4740]: I1014 13:39:25.509917 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-config-data\") pod \"fff845e7-62de-421e-80e6-e85408dc48be\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") "
Oct 14 13:39:25.509982 master-1 kubenswrapper[4740]: I1014 13:39:25.509966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-scripts\") pod \"fff845e7-62de-421e-80e6-e85408dc48be\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") "
Oct 14 13:39:25.510302 master-1 kubenswrapper[4740]: I1014 13:39:25.510057 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhn92\" (UniqueName: \"kubernetes.io/projected/fff845e7-62de-421e-80e6-e85408dc48be-kube-api-access-qhn92\") pod \"fff845e7-62de-421e-80e6-e85408dc48be\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") "
Oct 14 13:39:25.510302 master-1 kubenswrapper[4740]: I1014 13:39:25.510094 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-combined-ca-bundle\") pod \"fff845e7-62de-421e-80e6-e85408dc48be\" (UID: \"fff845e7-62de-421e-80e6-e85408dc48be\") "
Oct 14 13:39:25.513301 master-1 kubenswrapper[4740]: I1014 13:39:25.513255 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-scripts" (OuterVolumeSpecName: "scripts") pod "fff845e7-62de-421e-80e6-e85408dc48be" (UID: "fff845e7-62de-421e-80e6-e85408dc48be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:39:25.515795 master-1 kubenswrapper[4740]: I1014 13:39:25.515729 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff845e7-62de-421e-80e6-e85408dc48be-kube-api-access-qhn92" (OuterVolumeSpecName: "kube-api-access-qhn92") pod "fff845e7-62de-421e-80e6-e85408dc48be" (UID: "fff845e7-62de-421e-80e6-e85408dc48be"). InnerVolumeSpecName "kube-api-access-qhn92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:39:25.605535 master-1 kubenswrapper[4740]: I1014 13:39:25.605467 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-config-data" (OuterVolumeSpecName: "config-data") pod "fff845e7-62de-421e-80e6-e85408dc48be" (UID: "fff845e7-62de-421e-80e6-e85408dc48be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:39:25.607790 master-1 kubenswrapper[4740]: I1014 13:39:25.607740 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fff845e7-62de-421e-80e6-e85408dc48be" (UID: "fff845e7-62de-421e-80e6-e85408dc48be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:39:25.612981 master-1 kubenswrapper[4740]: I1014 13:39:25.612940 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:25.612981 master-1 kubenswrapper[4740]: I1014 13:39:25.612968 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-scripts\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:25.612981 master-1 kubenswrapper[4740]: I1014 13:39:25.612981 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhn92\" (UniqueName: \"kubernetes.io/projected/fff845e7-62de-421e-80e6-e85408dc48be-kube-api-access-qhn92\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:25.613169 master-1 kubenswrapper[4740]: I1014 13:39:25.612992 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff845e7-62de-421e-80e6-e85408dc48be-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:39:25.944207 master-1 kubenswrapper[4740]: I1014 13:39:25.944097 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"fff845e7-62de-421e-80e6-e85408dc48be","Type":"ContainerDied","Data":"68bc81b87d27f2b06c0275e58d0e7d3ea07f9e2bc74633a86445fbf9063ca145"}
Oct 14 13:39:25.944207 master-1 kubenswrapper[4740]: I1014 13:39:25.944210 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 14 13:39:25.944731 master-1 kubenswrapper[4740]: I1014 13:39:25.944308 4740 scope.go:117] "RemoveContainer" containerID="b9597e69ec4eb1049fc8bd28291d9c93eb820e02dd2554931f689c03f99af018"
Oct 14 13:39:25.945729 master-1 kubenswrapper[4740]: I1014 13:39:25.945671 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9"
Oct 14 13:39:25.972678 master-1 kubenswrapper[4740]: I1014 13:39:25.972624 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 13:39:25.991215 master-1 kubenswrapper[4740]: I1014 13:39:25.991048 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Oct 14 13:39:25.992672 master-1 kubenswrapper[4740]: I1014 13:39:25.992632 4740 scope.go:117] "RemoveContainer" containerID="cef992ea89ebf50dfda1bc86a49651108b422ad03faad05e23bbfc6a2cb69887"
Oct 14 13:39:26.012342 master-1 kubenswrapper[4740]: I1014 13:39:26.012249 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Oct 14 13:39:26.020386 master-1 kubenswrapper[4740]: I1014 13:39:26.020316 4740 scope.go:117] "RemoveContainer" containerID="0d8c04eeb24406133e1f3253a384b5a4b674f78fe6ddf5492016e2eb638c6b46"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053007 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: E1014 13:39:26.053446 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerName="dnsmasq-dns"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053466 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerName="dnsmasq-dns"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: E1014 13:39:26.053486 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-listener"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053497 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-listener"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: E1014 13:39:26.053517 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerName="init"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053527 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerName="init"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: E1014 13:39:26.053545 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-notifier"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053556 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-notifier"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: E1014 13:39:26.053570 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-evaluator"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053577 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-evaluator"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: E1014 13:39:26.053599 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-api"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053607 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-api"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053807 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-notifier"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053821 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e0f72f-8c88-4297-a690-dd519cb22ec5" containerName="dnsmasq-dns"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053836 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-api"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053849 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-evaluator"
Oct 14 13:39:26.055424 master-1 kubenswrapper[4740]: I1014 13:39:26.053866 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff845e7-62de-421e-80e6-e85408dc48be" containerName="aodh-listener"
Oct 14 13:39:26.059208 master-1 kubenswrapper[4740]: I1014 13:39:26.056326 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 14 13:39:26.062287 master-1 kubenswrapper[4740]: I1014 13:39:26.060683 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Oct 14 13:39:26.062287 master-1 kubenswrapper[4740]: I1014 13:39:26.061258 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Oct 14 13:39:26.062287 master-1 kubenswrapper[4740]: I1014 13:39:26.061670 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Oct 14 13:39:26.062929 master-1 kubenswrapper[4740]: I1014 13:39:26.062884 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Oct 14 13:39:26.076855 master-1 kubenswrapper[4740]: I1014 13:39:26.076792 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Oct 14 13:39:26.092206 master-1 kubenswrapper[4740]: I1014 13:39:26.088006 4740 scope.go:117] "RemoveContainer" containerID="5bac30b0bf2098c76e92a87b1a38be9c96fdd5009184f6401cb841effecb9d35"
Oct 14 13:39:26.129049 master-1 kubenswrapper[4740]: I1014 13:39:26.128969 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-internal-tls-certs\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.129287 master-1 kubenswrapper[4740]: I1014 13:39:26.129073 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslj5\" (UniqueName: \"kubernetes.io/projected/1bcadf48-c86a-485e-8303-9c451959c34f-kube-api-access-qslj5\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.129634 master-1 kubenswrapper[4740]: I1014 13:39:26.129547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-config-data\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.129724 master-1 kubenswrapper[4740]: I1014 13:39:26.129691 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-scripts\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.129762 master-1 kubenswrapper[4740]: I1014 13:39:26.129729 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-public-tls-certs\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.129797 master-1 kubenswrapper[4740]: I1014 13:39:26.129781 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.233326 master-1 kubenswrapper[4740]: I1014 13:39:26.232315 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-config-data\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.233326 master-1 kubenswrapper[4740]: I1014 13:39:26.232796 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-scripts\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.233326 master-1 kubenswrapper[4740]: I1014 13:39:26.232833 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-public-tls-certs\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.233326 master-1 kubenswrapper[4740]: I1014 13:39:26.232864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.233326 master-1 kubenswrapper[4740]: I1014 13:39:26.232981 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-internal-tls-certs\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.233326 master-1 kubenswrapper[4740]: I1014 13:39:26.233058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslj5\" (UniqueName: \"kubernetes.io/projected/1bcadf48-c86a-485e-8303-9c451959c34f-kube-api-access-qslj5\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.240298 master-1 kubenswrapper[4740]: I1014 13:39:26.238672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-public-tls-certs\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.240298 master-1 kubenswrapper[4740]: I1014 13:39:26.238967 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-internal-tls-certs\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.240298 master-1 kubenswrapper[4740]: I1014 13:39:26.239385 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.240298 master-1 kubenswrapper[4740]: I1014 13:39:26.239504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-scripts\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.245129 master-1 kubenswrapper[4740]: I1014 13:39:26.245081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcadf48-c86a-485e-8303-9c451959c34f-config-data\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.265096 master-1 kubenswrapper[4740]: I1014 13:39:26.265053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslj5\" (UniqueName: \"kubernetes.io/projected/1bcadf48-c86a-485e-8303-9c451959c34f-kube-api-access-qslj5\") pod \"aodh-0\" (UID: \"1bcadf48-c86a-485e-8303-9c451959c34f\") " pod="openstack/aodh-0"
Oct 14 13:39:26.416792 master-1 kubenswrapper[4740]: I1014 13:39:26.416713 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/aodh-0" Oct 14 13:39:26.918030 master-1 kubenswrapper[4740]: I1014 13:39:26.917925 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 14 13:39:26.929279 master-1 kubenswrapper[4740]: W1014 13:39:26.928978 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bcadf48_c86a_485e_8303_9c451959c34f.slice/crio-b1c500bc746911cca357ef50767a42d0e735fb985b4ed0065d999af32081f215 WatchSource:0}: Error finding container b1c500bc746911cca357ef50767a42d0e735fb985b4ed0065d999af32081f215: Status 404 returned error can't find the container with id b1c500bc746911cca357ef50767a42d0e735fb985b4ed0065d999af32081f215 Oct 14 13:39:26.932526 master-1 kubenswrapper[4740]: I1014 13:39:26.932491 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:39:26.962218 master-1 kubenswrapper[4740]: I1014 13:39:26.961794 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff845e7-62de-421e-80e6-e85408dc48be" path="/var/lib/kubelet/pods/fff845e7-62de-421e-80e6-e85408dc48be/volumes" Oct 14 13:39:26.962218 master-1 kubenswrapper[4740]: I1014 13:39:26.961914 4740 generic.go:334] "Generic (PLEG): container finished" podID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7" exitCode=1 Oct 14 13:39:26.966389 master-1 kubenswrapper[4740]: I1014 13:39:26.964335 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bcadf48-c86a-485e-8303-9c451959c34f","Type":"ContainerStarted","Data":"b1c500bc746911cca357ef50767a42d0e735fb985b4ed0065d999af32081f215"} Oct 14 13:39:26.966389 master-1 kubenswrapper[4740]: I1014 13:39:26.964424 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" 
event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"} Oct 14 13:39:26.966389 master-1 kubenswrapper[4740]: I1014 13:39:26.964483 4740 scope.go:117] "RemoveContainer" containerID="1a03f380a9bb99fc2a70bbdf2f672ef321155d61ee65d8e0f84fad6350edbaf9" Oct 14 13:39:26.966389 master-1 kubenswrapper[4740]: I1014 13:39:26.965063 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7" Oct 14 13:39:26.966389 master-1 kubenswrapper[4740]: E1014 13:39:26.965411 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:39:27.833662 master-1 kubenswrapper[4740]: I1014 13:39:27.833588 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-559c6f967c-vn2sd" Oct 14 13:39:28.115165 master-1 kubenswrapper[4740]: I1014 13:39:28.115102 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 14 13:39:28.115310 master-1 kubenswrapper[4740]: I1014 13:39:28.115181 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 14 13:39:29.001258 master-1 kubenswrapper[4740]: I1014 13:39:29.000953 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bcadf48-c86a-485e-8303-9c451959c34f","Type":"ContainerStarted","Data":"50a54914ae469bd9bcb5e36c8fdba43f79471340d30c7ecdfa25170646770be6"} Oct 14 13:39:29.001258 master-1 kubenswrapper[4740]: I1014 13:39:29.001001 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"1bcadf48-c86a-485e-8303-9c451959c34f","Type":"ContainerStarted","Data":"b71f6f9251acad45eecf063362df36d6a9cdee52d820b0f80361cf9674f89533"} Oct 14 13:39:29.137566 master-1 kubenswrapper[4740]: I1014 13:39:29.137468 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.0.183:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:29.137566 master-1 kubenswrapper[4740]: I1014 13:39:29.137541 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.0.183:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:30.013337 master-1 kubenswrapper[4740]: I1014 13:39:30.013279 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bcadf48-c86a-485e-8303-9c451959c34f","Type":"ContainerStarted","Data":"198d41f7f165aa6044efb5c3c64328a09f72c3fbed570eccac8918876b6b9db0"} Oct 14 13:39:31.027262 master-1 kubenswrapper[4740]: I1014 13:39:31.026722 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1bcadf48-c86a-485e-8303-9c451959c34f","Type":"ContainerStarted","Data":"d40572858742aea563ef65d181030caa4d0fe4217d3b3f2d590b74c0d0ba0fa6"} Oct 14 13:39:31.066981 master-1 kubenswrapper[4740]: I1014 13:39:31.066894 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.817557938 podStartE2EDuration="5.06687321s" podCreationTimestamp="2025-10-14 13:39:26 +0000 UTC" firstStartedPulling="2025-10-14 13:39:26.9324418 +0000 UTC m=+1992.742731139" lastFinishedPulling="2025-10-14 13:39:30.181757092 +0000 UTC m=+1995.992046411" observedRunningTime="2025-10-14 
13:39:31.06272869 +0000 UTC m=+1996.873018029" watchObservedRunningTime="2025-10-14 13:39:31.06687321 +0000 UTC m=+1996.877162539" Oct 14 13:39:34.958463 master-1 kubenswrapper[4740]: I1014 13:39:34.958290 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:34.959446 master-1 kubenswrapper[4740]: I1014 13:39:34.958840 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-log" containerID="cri-o://dafe6baecdbb0069acb44c5aa6444b8034f4aabde4a93f6db8242f3f237e8ab1" gracePeriod=30 Oct 14 13:39:34.959446 master-1 kubenswrapper[4740]: I1014 13:39:34.959007 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-2" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-api" containerID="cri-o://c9c63ed106e6d9d59aac3dd870fcb8fa67c4219f86e433f7ff70e5e7a0b54645" gracePeriod=30 Oct 14 13:39:36.087947 master-1 kubenswrapper[4740]: I1014 13:39:36.087853 4740 generic.go:334] "Generic (PLEG): container finished" podID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerID="dafe6baecdbb0069acb44c5aa6444b8034f4aabde4a93f6db8242f3f237e8ab1" exitCode=143 Oct 14 13:39:36.087947 master-1 kubenswrapper[4740]: I1014 13:39:36.087926 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"0e1ed27e-52e0-4e0c-b5e0-7175f483e357","Type":"ContainerDied","Data":"dafe6baecdbb0069acb44c5aa6444b8034f4aabde4a93f6db8242f3f237e8ab1"} Oct 14 13:39:38.116864 master-1 kubenswrapper[4740]: I1014 13:39:38.116796 4740 generic.go:334] "Generic (PLEG): container finished" podID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerID="c9c63ed106e6d9d59aac3dd870fcb8fa67c4219f86e433f7ff70e5e7a0b54645" exitCode=0 Oct 14 13:39:38.117762 master-1 kubenswrapper[4740]: I1014 13:39:38.116886 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" 
event={"ID":"0e1ed27e-52e0-4e0c-b5e0-7175f483e357","Type":"ContainerDied","Data":"c9c63ed106e6d9d59aac3dd870fcb8fa67c4219f86e433f7ff70e5e7a0b54645"} Oct 14 13:39:38.905363 master-1 kubenswrapper[4740]: I1014 13:39:38.905273 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:39:38.997350 master-1 kubenswrapper[4740]: I1014 13:39:38.995490 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-public-tls-certs\") pod \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " Oct 14 13:39:38.997350 master-1 kubenswrapper[4740]: I1014 13:39:38.996202 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-combined-ca-bundle\") pod \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " Oct 14 13:39:38.997350 master-1 kubenswrapper[4740]: I1014 13:39:38.996394 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-logs\") pod \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " Oct 14 13:39:38.997350 master-1 kubenswrapper[4740]: I1014 13:39:38.996529 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9hcp\" (UniqueName: \"kubernetes.io/projected/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-kube-api-access-p9hcp\") pod \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " Oct 14 13:39:38.997350 master-1 kubenswrapper[4740]: I1014 13:39:38.996613 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-config-data\") pod \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " Oct 14 13:39:38.997350 master-1 kubenswrapper[4740]: I1014 13:39:38.996735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-internal-tls-certs\") pod \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\" (UID: \"0e1ed27e-52e0-4e0c-b5e0-7175f483e357\") " Oct 14 13:39:38.998158 master-1 kubenswrapper[4740]: I1014 13:39:38.998104 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-logs" (OuterVolumeSpecName: "logs") pod "0e1ed27e-52e0-4e0c-b5e0-7175f483e357" (UID: "0e1ed27e-52e0-4e0c-b5e0-7175f483e357"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:39:39.003575 master-1 kubenswrapper[4740]: I1014 13:39:39.003533 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-kube-api-access-p9hcp" (OuterVolumeSpecName: "kube-api-access-p9hcp") pod "0e1ed27e-52e0-4e0c-b5e0-7175f483e357" (UID: "0e1ed27e-52e0-4e0c-b5e0-7175f483e357"). InnerVolumeSpecName "kube-api-access-p9hcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:39:39.028188 master-1 kubenswrapper[4740]: I1014 13:39:39.028067 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e1ed27e-52e0-4e0c-b5e0-7175f483e357" (UID: "0e1ed27e-52e0-4e0c-b5e0-7175f483e357"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:39.030524 master-1 kubenswrapper[4740]: I1014 13:39:39.030491 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-config-data" (OuterVolumeSpecName: "config-data") pod "0e1ed27e-52e0-4e0c-b5e0-7175f483e357" (UID: "0e1ed27e-52e0-4e0c-b5e0-7175f483e357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:39.046087 master-1 kubenswrapper[4740]: I1014 13:39:39.046027 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0e1ed27e-52e0-4e0c-b5e0-7175f483e357" (UID: "0e1ed27e-52e0-4e0c-b5e0-7175f483e357"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:39.047836 master-1 kubenswrapper[4740]: I1014 13:39:39.047718 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e1ed27e-52e0-4e0c-b5e0-7175f483e357" (UID: "0e1ed27e-52e0-4e0c-b5e0-7175f483e357"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:39.099859 master-1 kubenswrapper[4740]: I1014 13:39:39.099783 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:39.099859 master-1 kubenswrapper[4740]: I1014 13:39:39.099834 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9hcp\" (UniqueName: \"kubernetes.io/projected/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-kube-api-access-p9hcp\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:39.099859 master-1 kubenswrapper[4740]: I1014 13:39:39.099847 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:39.099859 master-1 kubenswrapper[4740]: I1014 13:39:39.099858 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-internal-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:39.099859 master-1 kubenswrapper[4740]: I1014 13:39:39.099866 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-public-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:39.099859 master-1 kubenswrapper[4740]: I1014 13:39:39.099875 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1ed27e-52e0-4e0c-b5e0-7175f483e357-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:39.127887 master-1 kubenswrapper[4740]: I1014 13:39:39.127832 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" 
event={"ID":"0e1ed27e-52e0-4e0c-b5e0-7175f483e357","Type":"ContainerDied","Data":"9907f06576e79f163e9e402b8b62801fc2209887ed84522588ef477db508522c"} Oct 14 13:39:39.128341 master-1 kubenswrapper[4740]: I1014 13:39:39.127914 4740 scope.go:117] "RemoveContainer" containerID="c9c63ed106e6d9d59aac3dd870fcb8fa67c4219f86e433f7ff70e5e7a0b54645" Oct 14 13:39:39.128341 master-1 kubenswrapper[4740]: I1014 13:39:39.127919 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:39:39.147963 master-1 kubenswrapper[4740]: I1014 13:39:39.147894 4740 scope.go:117] "RemoveContainer" containerID="dafe6baecdbb0069acb44c5aa6444b8034f4aabde4a93f6db8242f3f237e8ab1" Oct 14 13:39:39.184948 master-1 kubenswrapper[4740]: I1014 13:39:39.181084 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:39.190984 master-1 kubenswrapper[4740]: I1014 13:39:39.190911 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:39.219328 master-1 kubenswrapper[4740]: I1014 13:39:39.219222 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:39.219752 master-1 kubenswrapper[4740]: E1014 13:39:39.219719 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-api" Oct 14 13:39:39.219752 master-1 kubenswrapper[4740]: I1014 13:39:39.219746 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-api" Oct 14 13:39:39.219836 master-1 kubenswrapper[4740]: E1014 13:39:39.219776 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-log" Oct 14 13:39:39.219836 master-1 kubenswrapper[4740]: I1014 13:39:39.219829 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" 
containerName="nova-api-log" Oct 14 13:39:39.220071 master-1 kubenswrapper[4740]: I1014 13:39:39.220043 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-api" Oct 14 13:39:39.220071 master-1 kubenswrapper[4740]: I1014 13:39:39.220064 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" containerName="nova-api-log" Oct 14 13:39:39.221455 master-1 kubenswrapper[4740]: I1014 13:39:39.221420 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:39:39.224788 master-1 kubenswrapper[4740]: I1014 13:39:39.224537 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 13:39:39.225330 master-1 kubenswrapper[4740]: I1014 13:39:39.225276 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:39:39.225481 master-1 kubenswrapper[4740]: I1014 13:39:39.225343 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 13:39:39.258430 master-1 kubenswrapper[4740]: I1014 13:39:39.245900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:39.302995 master-1 kubenswrapper[4740]: I1014 13:39:39.302815 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-config-data\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.302995 master-1 kubenswrapper[4740]: I1014 13:39:39.302895 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0838b1-66d3-4049-98f2-13f5f943d0ab-logs\") pod \"nova-api-2\" (UID: 
\"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.303383 master-1 kubenswrapper[4740]: I1014 13:39:39.303209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-internal-tls-certs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.303383 master-1 kubenswrapper[4740]: I1014 13:39:39.303290 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75f7\" (UniqueName: \"kubernetes.io/projected/6c0838b1-66d3-4049-98f2-13f5f943d0ab-kube-api-access-t75f7\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.303694 master-1 kubenswrapper[4740]: I1014 13:39:39.303657 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-public-tls-certs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.303779 master-1 kubenswrapper[4740]: I1014 13:39:39.303752 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.406777 master-1 kubenswrapper[4740]: I1014 13:39:39.406556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75f7\" (UniqueName: \"kubernetes.io/projected/6c0838b1-66d3-4049-98f2-13f5f943d0ab-kube-api-access-t75f7\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " 
pod="openstack/nova-api-2" Oct 14 13:39:39.406777 master-1 kubenswrapper[4740]: I1014 13:39:39.406676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-public-tls-certs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.406777 master-1 kubenswrapper[4740]: I1014 13:39:39.406765 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.408747 master-1 kubenswrapper[4740]: I1014 13:39:39.406891 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-config-data\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.408747 master-1 kubenswrapper[4740]: I1014 13:39:39.406959 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c0838b1-66d3-4049-98f2-13f5f943d0ab-logs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.408747 master-1 kubenswrapper[4740]: I1014 13:39:39.407107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-internal-tls-certs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.408747 master-1 kubenswrapper[4740]: I1014 13:39:39.408183 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/6c0838b1-66d3-4049-98f2-13f5f943d0ab-logs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.411535 master-1 kubenswrapper[4740]: I1014 13:39:39.410791 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-config-data\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.411535 master-1 kubenswrapper[4740]: I1014 13:39:39.411440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-public-tls-certs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.411666 master-1 kubenswrapper[4740]: I1014 13:39:39.411492 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-combined-ca-bundle\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.413629 master-1 kubenswrapper[4740]: I1014 13:39:39.413579 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c0838b1-66d3-4049-98f2-13f5f943d0ab-internal-tls-certs\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 13:39:39.427374 master-1 kubenswrapper[4740]: I1014 13:39:39.427322 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75f7\" (UniqueName: \"kubernetes.io/projected/6c0838b1-66d3-4049-98f2-13f5f943d0ab-kube-api-access-t75f7\") pod \"nova-api-2\" (UID: \"6c0838b1-66d3-4049-98f2-13f5f943d0ab\") " pod="openstack/nova-api-2" Oct 14 
13:39:39.543860 master-1 kubenswrapper[4740]: I1014 13:39:39.543794 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2" Oct 14 13:39:40.092478 master-1 kubenswrapper[4740]: W1014 13:39:40.092425 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c0838b1_66d3_4049_98f2_13f5f943d0ab.slice/crio-d8cf761d3ed95fb57c002bdf92f5eea27680efc66250bb8377288e8d3416121f WatchSource:0}: Error finding container d8cf761d3ed95fb57c002bdf92f5eea27680efc66250bb8377288e8d3416121f: Status 404 returned error can't find the container with id d8cf761d3ed95fb57c002bdf92f5eea27680efc66250bb8377288e8d3416121f Oct 14 13:39:40.093497 master-1 kubenswrapper[4740]: I1014 13:39:40.093423 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2"] Oct 14 13:39:40.143212 master-1 kubenswrapper[4740]: I1014 13:39:40.143158 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"6c0838b1-66d3-4049-98f2-13f5f943d0ab","Type":"ContainerStarted","Data":"d8cf761d3ed95fb57c002bdf92f5eea27680efc66250bb8377288e8d3416121f"} Oct 14 13:39:40.944276 master-1 kubenswrapper[4740]: I1014 13:39:40.944174 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7" Oct 14 13:39:40.944799 master-1 kubenswrapper[4740]: E1014 13:39:40.944706 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:39:40.955170 master-1 kubenswrapper[4740]: I1014 13:39:40.955083 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0e1ed27e-52e0-4e0c-b5e0-7175f483e357" path="/var/lib/kubelet/pods/0e1ed27e-52e0-4e0c-b5e0-7175f483e357/volumes" Oct 14 13:39:41.152481 master-1 kubenswrapper[4740]: I1014 13:39:41.152423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"6c0838b1-66d3-4049-98f2-13f5f943d0ab","Type":"ContainerStarted","Data":"e29f95d0b9cab38f4d67d66897ede3520178bc7d1fba0ab270e259c92ee4158e"} Oct 14 13:39:41.152481 master-1 kubenswrapper[4740]: I1014 13:39:41.152486 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2" event={"ID":"6c0838b1-66d3-4049-98f2-13f5f943d0ab","Type":"ContainerStarted","Data":"c1a25e39ac144ebff5a68eab1bb675715ee83a9c4ab1838313388edcb93f47eb"} Oct 14 13:39:41.191425 master-1 kubenswrapper[4740]: I1014 13:39:41.191323 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2" podStartSLOduration=2.191289417 podStartE2EDuration="2.191289417s" podCreationTimestamp="2025-10-14 13:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:39:41.176950568 +0000 UTC m=+2006.987239907" watchObservedRunningTime="2025-10-14 13:39:41.191289417 +0000 UTC m=+2007.001578776" Oct 14 13:39:45.796517 master-1 kubenswrapper[4740]: I1014 13:39:45.796404 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-2" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:45.797497 master-1 kubenswrapper[4740]: I1014 13:39:45.796496 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-2" podUID="2e7b8c63-7f9e-4b06-8b7f-fdc199d0818a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.178:8774/\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:49.545147 master-1 kubenswrapper[4740]: I1014 13:39:49.545067 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 14 13:39:49.545703 master-1 kubenswrapper[4740]: I1014 13:39:49.545173 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-2" Oct 14 13:39:50.061364 master-1 kubenswrapper[4740]: I1014 13:39:50.061296 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:39:50.061634 master-1 kubenswrapper[4740]: I1014 13:39:50.061597 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-1" podUID="4ba8d44d-71cf-454e-84fc-50f5c917f079" containerName="nova-scheduler-scheduler" containerID="cri-o://33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe" gracePeriod=30 Oct 14 13:39:50.561658 master-1 kubenswrapper[4740]: I1014 13:39:50.561559 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="6c0838b1-66d3-4049-98f2-13f5f943d0ab" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:50.562408 master-1 kubenswrapper[4740]: I1014 13:39:50.561936 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-2" podUID="6c0838b1-66d3-4049-98f2-13f5f943d0ab" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.0.185:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:39:50.954876 master-1 kubenswrapper[4740]: E1014 13:39:50.953366 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 13:39:50.957421 master-1 kubenswrapper[4740]: E1014 13:39:50.957370 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 13:39:50.960685 master-1 kubenswrapper[4740]: E1014 13:39:50.960566 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 13:39:50.960685 master-1 kubenswrapper[4740]: E1014 13:39:50.960681 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-1" podUID="4ba8d44d-71cf-454e-84fc-50f5c917f079" containerName="nova-scheduler-scheduler" Oct 14 13:39:53.944851 master-1 kubenswrapper[4740]: I1014 13:39:53.944768 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7" Oct 14 13:39:53.945984 master-1 kubenswrapper[4740]: E1014 13:39:53.945276 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" Oct 14 13:39:55.325986 master-1 
kubenswrapper[4740]: I1014 13:39:55.325909 4740 generic.go:334] "Generic (PLEG): container finished" podID="4ba8d44d-71cf-454e-84fc-50f5c917f079" containerID="33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe" exitCode=0 Oct 14 13:39:55.325986 master-1 kubenswrapper[4740]: I1014 13:39:55.325970 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4ba8d44d-71cf-454e-84fc-50f5c917f079","Type":"ContainerDied","Data":"33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe"} Oct 14 13:39:55.549127 master-1 kubenswrapper[4740]: I1014 13:39:55.549031 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:39:55.592399 master-1 kubenswrapper[4740]: I1014 13:39:55.592200 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-config-data\") pod \"4ba8d44d-71cf-454e-84fc-50f5c917f079\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " Oct 14 13:39:55.592399 master-1 kubenswrapper[4740]: I1014 13:39:55.592331 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq2zk\" (UniqueName: \"kubernetes.io/projected/4ba8d44d-71cf-454e-84fc-50f5c917f079-kube-api-access-qq2zk\") pod \"4ba8d44d-71cf-454e-84fc-50f5c917f079\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " Oct 14 13:39:55.592399 master-1 kubenswrapper[4740]: I1014 13:39:55.592383 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-combined-ca-bundle\") pod \"4ba8d44d-71cf-454e-84fc-50f5c917f079\" (UID: \"4ba8d44d-71cf-454e-84fc-50f5c917f079\") " Oct 14 13:39:55.596121 master-1 kubenswrapper[4740]: I1014 13:39:55.596049 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4ba8d44d-71cf-454e-84fc-50f5c917f079-kube-api-access-qq2zk" (OuterVolumeSpecName: "kube-api-access-qq2zk") pod "4ba8d44d-71cf-454e-84fc-50f5c917f079" (UID: "4ba8d44d-71cf-454e-84fc-50f5c917f079"). InnerVolumeSpecName "kube-api-access-qq2zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:39:55.633944 master-1 kubenswrapper[4740]: I1014 13:39:55.633849 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-config-data" (OuterVolumeSpecName: "config-data") pod "4ba8d44d-71cf-454e-84fc-50f5c917f079" (UID: "4ba8d44d-71cf-454e-84fc-50f5c917f079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:55.657954 master-1 kubenswrapper[4740]: I1014 13:39:55.657871 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ba8d44d-71cf-454e-84fc-50f5c917f079" (UID: "4ba8d44d-71cf-454e-84fc-50f5c917f079"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:39:55.695117 master-1 kubenswrapper[4740]: I1014 13:39:55.695042 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:55.695117 master-1 kubenswrapper[4740]: I1014 13:39:55.695089 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq2zk\" (UniqueName: \"kubernetes.io/projected/4ba8d44d-71cf-454e-84fc-50f5c917f079-kube-api-access-qq2zk\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:55.695117 master-1 kubenswrapper[4740]: I1014 13:39:55.695103 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8d44d-71cf-454e-84fc-50f5c917f079-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:39:56.341140 master-1 kubenswrapper[4740]: I1014 13:39:56.341003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"4ba8d44d-71cf-454e-84fc-50f5c917f079","Type":"ContainerDied","Data":"5cacab3b1e65ce4b412affd246b28a675956ee2d06ecba672429ac8f2e964de9"} Oct 14 13:39:56.341140 master-1 kubenswrapper[4740]: I1014 13:39:56.341114 4740 scope.go:117] "RemoveContainer" containerID="33dcc8d4184f2bda4e881b9aabb27b9223fcfe23ccafd2e5bec64e239fe9afbe" Oct 14 13:39:56.342688 master-1 kubenswrapper[4740]: I1014 13:39:56.341109 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:39:56.410744 master-1 kubenswrapper[4740]: I1014 13:39:56.410660 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:39:56.423053 master-1 kubenswrapper[4740]: I1014 13:39:56.422996 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:39:56.449504 master-1 kubenswrapper[4740]: I1014 13:39:56.447765 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:39:56.449504 master-1 kubenswrapper[4740]: E1014 13:39:56.448219 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba8d44d-71cf-454e-84fc-50f5c917f079" containerName="nova-scheduler-scheduler" Oct 14 13:39:56.449504 master-1 kubenswrapper[4740]: I1014 13:39:56.448267 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba8d44d-71cf-454e-84fc-50f5c917f079" containerName="nova-scheduler-scheduler" Oct 14 13:39:56.449504 master-1 kubenswrapper[4740]: I1014 13:39:56.448491 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba8d44d-71cf-454e-84fc-50f5c917f079" containerName="nova-scheduler-scheduler" Oct 14 13:39:56.449504 master-1 kubenswrapper[4740]: I1014 13:39:56.449272 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:39:56.453745 master-1 kubenswrapper[4740]: I1014 13:39:56.453650 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 13:39:56.469552 master-1 kubenswrapper[4740]: I1014 13:39:56.469486 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:39:56.512965 master-1 kubenswrapper[4740]: I1014 13:39:56.512904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-config-data\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.513331 master-1 kubenswrapper[4740]: I1014 13:39:56.513282 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.513531 master-1 kubenswrapper[4740]: I1014 13:39:56.513514 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l96wz\" (UniqueName: \"kubernetes.io/projected/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-kube-api-access-l96wz\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.616000 master-1 kubenswrapper[4740]: I1014 13:39:56.615885 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-config-data\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.616273 
master-1 kubenswrapper[4740]: I1014 13:39:56.616254 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.616396 master-1 kubenswrapper[4740]: I1014 13:39:56.616382 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l96wz\" (UniqueName: \"kubernetes.io/projected/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-kube-api-access-l96wz\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.620646 master-1 kubenswrapper[4740]: I1014 13:39:56.620589 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-combined-ca-bundle\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.622130 master-1 kubenswrapper[4740]: I1014 13:39:56.622069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-config-data\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.638629 master-1 kubenswrapper[4740]: I1014 13:39:56.638562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l96wz\" (UniqueName: \"kubernetes.io/projected/9abd9470-eb6c-4fa4-817c-2fa587f6ca02-kube-api-access-l96wz\") pod \"nova-scheduler-1\" (UID: \"9abd9470-eb6c-4fa4-817c-2fa587f6ca02\") " pod="openstack/nova-scheduler-1" Oct 14 13:39:56.794114 master-1 kubenswrapper[4740]: I1014 13:39:56.793978 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-1" Oct 14 13:39:56.961399 master-1 kubenswrapper[4740]: I1014 13:39:56.961288 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba8d44d-71cf-454e-84fc-50f5c917f079" path="/var/lib/kubelet/pods/4ba8d44d-71cf-454e-84fc-50f5c917f079/volumes" Oct 14 13:39:57.320546 master-1 kubenswrapper[4740]: W1014 13:39:57.320448 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9abd9470_eb6c_4fa4_817c_2fa587f6ca02.slice/crio-b0cff5f39873f64f11e568fca7ea26897deb82612cf57311434050e0084c633f WatchSource:0}: Error finding container b0cff5f39873f64f11e568fca7ea26897deb82612cf57311434050e0084c633f: Status 404 returned error can't find the container with id b0cff5f39873f64f11e568fca7ea26897deb82612cf57311434050e0084c633f Oct 14 13:39:57.322979 master-1 kubenswrapper[4740]: I1014 13:39:57.322926 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-1"] Oct 14 13:39:57.358320 master-1 kubenswrapper[4740]: I1014 13:39:57.358200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"9abd9470-eb6c-4fa4-817c-2fa587f6ca02","Type":"ContainerStarted","Data":"b0cff5f39873f64f11e568fca7ea26897deb82612cf57311434050e0084c633f"} Oct 14 13:39:58.391533 master-1 kubenswrapper[4740]: I1014 13:39:58.391391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-1" event={"ID":"9abd9470-eb6c-4fa4-817c-2fa587f6ca02","Type":"ContainerStarted","Data":"6db46f704404aa7a40296a2ccc9646444819174727c6804b54bb2de564f32b58"} Oct 14 13:39:58.428791 master-1 kubenswrapper[4740]: I1014 13:39:58.428689 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-1" podStartSLOduration=2.428660651 podStartE2EDuration="2.428660651s" podCreationTimestamp="2025-10-14 13:39:56 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:39:58.417440434 +0000 UTC m=+2024.227729793" watchObservedRunningTime="2025-10-14 13:39:58.428660651 +0000 UTC m=+2024.238950020" Oct 14 13:39:59.554820 master-1 kubenswrapper[4740]: I1014 13:39:59.554734 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 14 13:39:59.556064 master-1 kubenswrapper[4740]: I1014 13:39:59.555311 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 14 13:39:59.557341 master-1 kubenswrapper[4740]: I1014 13:39:59.557283 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-2" Oct 14 13:39:59.563714 master-1 kubenswrapper[4740]: I1014 13:39:59.563650 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 14 13:40:00.144444 master-1 kubenswrapper[4740]: I1014 13:40:00.144326 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:40:00.144805 master-1 kubenswrapper[4740]: I1014 13:40:00.144728 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-log" containerID="cri-o://9ea58968249d52450e1ea1f1a4cdbcf459bdfa17a8c2c12c971de75a7ca16b7e" gracePeriod=30 Oct 14 13:40:00.145022 master-1 kubenswrapper[4740]: I1014 13:40:00.144920 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-1" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-metadata" containerID="cri-o://2955006b356315d3247efbd601e1d531451e33a4defb0d38baa1bd4af2a10d6a" gracePeriod=30 Oct 14 13:40:00.419714 master-1 kubenswrapper[4740]: I1014 13:40:00.419582 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="034d010b-5277-4cbe-b908-94fef09db25d" containerID="9ea58968249d52450e1ea1f1a4cdbcf459bdfa17a8c2c12c971de75a7ca16b7e" exitCode=143 Oct 14 13:40:00.419917 master-1 kubenswrapper[4740]: I1014 13:40:00.419791 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"034d010b-5277-4cbe-b908-94fef09db25d","Type":"ContainerDied","Data":"9ea58968249d52450e1ea1f1a4cdbcf459bdfa17a8c2c12c971de75a7ca16b7e"} Oct 14 13:40:00.419917 master-1 kubenswrapper[4740]: I1014 13:40:00.419907 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-2" Oct 14 13:40:00.432828 master-1 kubenswrapper[4740]: I1014 13:40:00.432714 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-2" Oct 14 13:40:01.794261 master-1 kubenswrapper[4740]: I1014 13:40:01.794153 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-1" Oct 14 13:40:03.452166 master-1 kubenswrapper[4740]: I1014 13:40:03.452034 4740 generic.go:334] "Generic (PLEG): container finished" podID="034d010b-5277-4cbe-b908-94fef09db25d" containerID="2955006b356315d3247efbd601e1d531451e33a4defb0d38baa1bd4af2a10d6a" exitCode=0 Oct 14 13:40:03.452166 master-1 kubenswrapper[4740]: I1014 13:40:03.452143 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"034d010b-5277-4cbe-b908-94fef09db25d","Type":"ContainerDied","Data":"2955006b356315d3247efbd601e1d531451e33a4defb0d38baa1bd4af2a10d6a"} Oct 14 13:40:04.067439 master-1 kubenswrapper[4740]: I1014 13:40:04.067115 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:40:04.201272 master-1 kubenswrapper[4740]: I1014 13:40:04.201150 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-combined-ca-bundle\") pod \"034d010b-5277-4cbe-b908-94fef09db25d\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " Oct 14 13:40:04.201272 master-1 kubenswrapper[4740]: I1014 13:40:04.201288 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034d010b-5277-4cbe-b908-94fef09db25d-logs\") pod \"034d010b-5277-4cbe-b908-94fef09db25d\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " Oct 14 13:40:04.201777 master-1 kubenswrapper[4740]: I1014 13:40:04.201336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-config-data\") pod \"034d010b-5277-4cbe-b908-94fef09db25d\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " Oct 14 13:40:04.201777 master-1 kubenswrapper[4740]: I1014 13:40:04.201449 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-nova-metadata-tls-certs\") pod \"034d010b-5277-4cbe-b908-94fef09db25d\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " Oct 14 13:40:04.201777 master-1 kubenswrapper[4740]: I1014 13:40:04.201662 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgxhq\" (UniqueName: \"kubernetes.io/projected/034d010b-5277-4cbe-b908-94fef09db25d-kube-api-access-qgxhq\") pod \"034d010b-5277-4cbe-b908-94fef09db25d\" (UID: \"034d010b-5277-4cbe-b908-94fef09db25d\") " Oct 14 13:40:04.201915 master-1 kubenswrapper[4740]: I1014 13:40:04.201859 4740 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/034d010b-5277-4cbe-b908-94fef09db25d-logs" (OuterVolumeSpecName: "logs") pod "034d010b-5277-4cbe-b908-94fef09db25d" (UID: "034d010b-5277-4cbe-b908-94fef09db25d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:40:04.203578 master-1 kubenswrapper[4740]: I1014 13:40:04.203512 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/034d010b-5277-4cbe-b908-94fef09db25d-logs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:40:04.208656 master-1 kubenswrapper[4740]: I1014 13:40:04.208561 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/034d010b-5277-4cbe-b908-94fef09db25d-kube-api-access-qgxhq" (OuterVolumeSpecName: "kube-api-access-qgxhq") pod "034d010b-5277-4cbe-b908-94fef09db25d" (UID: "034d010b-5277-4cbe-b908-94fef09db25d"). InnerVolumeSpecName "kube-api-access-qgxhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:40:04.249676 master-1 kubenswrapper[4740]: I1014 13:40:04.249593 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "034d010b-5277-4cbe-b908-94fef09db25d" (UID: "034d010b-5277-4cbe-b908-94fef09db25d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:40:04.251474 master-1 kubenswrapper[4740]: I1014 13:40:04.251406 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-config-data" (OuterVolumeSpecName: "config-data") pod "034d010b-5277-4cbe-b908-94fef09db25d" (UID: "034d010b-5277-4cbe-b908-94fef09db25d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:40:04.280790 master-1 kubenswrapper[4740]: I1014 13:40:04.280699 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "034d010b-5277-4cbe-b908-94fef09db25d" (UID: "034d010b-5277-4cbe-b908-94fef09db25d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:40:04.305070 master-1 kubenswrapper[4740]: I1014 13:40:04.305013 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgxhq\" (UniqueName: \"kubernetes.io/projected/034d010b-5277-4cbe-b908-94fef09db25d-kube-api-access-qgxhq\") on node \"master-1\" DevicePath \"\"" Oct 14 13:40:04.305070 master-1 kubenswrapper[4740]: I1014 13:40:04.305057 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 13:40:04.305070 master-1 kubenswrapper[4740]: I1014 13:40:04.305071 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 13:40:04.305070 master-1 kubenswrapper[4740]: I1014 13:40:04.305085 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/034d010b-5277-4cbe-b908-94fef09db25d-nova-metadata-tls-certs\") on node \"master-1\" DevicePath \"\"" Oct 14 13:40:04.466500 master-1 kubenswrapper[4740]: I1014 13:40:04.466430 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" 
event={"ID":"034d010b-5277-4cbe-b908-94fef09db25d","Type":"ContainerDied","Data":"7b1f3b7ebf468e68da3fefa3cbd625f578a3852318493dfc9bbfbcbae6780bf1"} Oct 14 13:40:04.467192 master-1 kubenswrapper[4740]: I1014 13:40:04.466521 4740 scope.go:117] "RemoveContainer" containerID="2955006b356315d3247efbd601e1d531451e33a4defb0d38baa1bd4af2a10d6a" Oct 14 13:40:04.467192 master-1 kubenswrapper[4740]: I1014 13:40:04.466537 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:40:04.508434 master-1 kubenswrapper[4740]: I1014 13:40:04.508147 4740 scope.go:117] "RemoveContainer" containerID="9ea58968249d52450e1ea1f1a4cdbcf459bdfa17a8c2c12c971de75a7ca16b7e" Oct 14 13:40:04.518078 master-1 kubenswrapper[4740]: I1014 13:40:04.517991 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:40:04.532306 master-1 kubenswrapper[4740]: I1014 13:40:04.531339 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:40:04.558621 master-1 kubenswrapper[4740]: I1014 13:40:04.558559 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:40:04.558964 master-1 kubenswrapper[4740]: E1014 13:40:04.558940 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-metadata" Oct 14 13:40:04.558964 master-1 kubenswrapper[4740]: I1014 13:40:04.558962 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-metadata" Oct 14 13:40:04.558964 master-1 kubenswrapper[4740]: E1014 13:40:04.558978 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-log" Oct 14 13:40:04.558964 master-1 kubenswrapper[4740]: I1014 13:40:04.558985 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-log" Oct 14 13:40:04.559187 master-1 kubenswrapper[4740]: I1014 13:40:04.559179 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-log" Oct 14 13:40:04.559315 master-1 kubenswrapper[4740]: I1014 13:40:04.559197 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-metadata" Oct 14 13:40:04.562305 master-1 kubenswrapper[4740]: I1014 13:40:04.562253 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1" Oct 14 13:40:04.564528 master-1 kubenswrapper[4740]: I1014 13:40:04.564496 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 13:40:04.564731 master-1 kubenswrapper[4740]: I1014 13:40:04.564707 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 13:40:04.567284 master-1 kubenswrapper[4740]: I1014 13:40:04.567244 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"] Oct 14 13:40:04.612265 master-1 kubenswrapper[4740]: I1014 13:40:04.612148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrqn\" (UniqueName: \"kubernetes.io/projected/b311d350-8cd1-43cd-857d-d22df59cc9d4-kube-api-access-kxrqn\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1" Oct 14 13:40:04.612265 master-1 kubenswrapper[4740]: I1014 13:40:04.612225 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b311d350-8cd1-43cd-857d-d22df59cc9d4-logs\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " 
pod="openstack/nova-metadata-1"
Oct 14 13:40:04.612636 master-1 kubenswrapper[4740]: I1014 13:40:04.612378 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-config-data\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.612636 master-1 kubenswrapper[4740]: I1014 13:40:04.612433 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.613480 master-1 kubenswrapper[4740]: I1014 13:40:04.613399 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.723570 master-1 kubenswrapper[4740]: I1014 13:40:04.723213 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.723570 master-1 kubenswrapper[4740]: I1014 13:40:04.723531 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrqn\" (UniqueName: \"kubernetes.io/projected/b311d350-8cd1-43cd-857d-d22df59cc9d4-kube-api-access-kxrqn\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.724490 master-1 kubenswrapper[4740]: I1014 13:40:04.723625 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b311d350-8cd1-43cd-857d-d22df59cc9d4-logs\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.724490 master-1 kubenswrapper[4740]: I1014 13:40:04.723721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-config-data\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.724490 master-1 kubenswrapper[4740]: I1014 13:40:04.723776 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.724490 master-1 kubenswrapper[4740]: I1014 13:40:04.724207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b311d350-8cd1-43cd-857d-d22df59cc9d4-logs\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.728140 master-1 kubenswrapper[4740]: I1014 13:40:04.728098 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-config-data\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.728268 master-1 kubenswrapper[4740]: I1014 13:40:04.728091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-nova-metadata-tls-certs\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.729153 master-1 kubenswrapper[4740]: I1014 13:40:04.729081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b311d350-8cd1-43cd-857d-d22df59cc9d4-combined-ca-bundle\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.748301 master-1 kubenswrapper[4740]: I1014 13:40:04.748204 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrqn\" (UniqueName: \"kubernetes.io/projected/b311d350-8cd1-43cd-857d-d22df59cc9d4-kube-api-access-kxrqn\") pod \"nova-metadata-1\" (UID: \"b311d350-8cd1-43cd-857d-d22df59cc9d4\") " pod="openstack/nova-metadata-1"
Oct 14 13:40:04.881273 master-1 kubenswrapper[4740]: I1014 13:40:04.881194 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-1"
Oct 14 13:40:04.960212 master-1 kubenswrapper[4740]: I1014 13:40:04.960128 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="034d010b-5277-4cbe-b908-94fef09db25d" path="/var/lib/kubelet/pods/034d010b-5277-4cbe-b908-94fef09db25d/volumes"
Oct 14 13:40:05.380777 master-1 kubenswrapper[4740]: W1014 13:40:05.380695 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb311d350_8cd1_43cd_857d_d22df59cc9d4.slice/crio-aac2927311fdac92a4048095b3116d90c0dc3218719d382eb13410f6953ccb16 WatchSource:0}: Error finding container aac2927311fdac92a4048095b3116d90c0dc3218719d382eb13410f6953ccb16: Status 404 returned error can't find the container with id aac2927311fdac92a4048095b3116d90c0dc3218719d382eb13410f6953ccb16
Oct 14 13:40:05.389910 master-1 kubenswrapper[4740]: I1014 13:40:05.389858 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-1"]
Oct 14 13:40:05.482908 master-1 kubenswrapper[4740]: I1014 13:40:05.482809 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"b311d350-8cd1-43cd-857d-d22df59cc9d4","Type":"ContainerStarted","Data":"aac2927311fdac92a4048095b3116d90c0dc3218719d382eb13410f6953ccb16"}
Oct 14 13:40:06.506378 master-1 kubenswrapper[4740]: I1014 13:40:06.506306 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"b311d350-8cd1-43cd-857d-d22df59cc9d4","Type":"ContainerStarted","Data":"e2cd70601905eb06252b0c2dd1d12ac5c1eaa65d3c8986c9848afbc44fff75cb"}
Oct 14 13:40:06.506378 master-1 kubenswrapper[4740]: I1014 13:40:06.506383 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-1" event={"ID":"b311d350-8cd1-43cd-857d-d22df59cc9d4","Type":"ContainerStarted","Data":"4f35fd5c74b48f8315e8138bc6c4050dc8a06894f9ad19201b81544365aef0af"}
Oct 14 13:40:06.554051 master-1 kubenswrapper[4740]: I1014 13:40:06.553382 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-1" podStartSLOduration=2.5533577320000003 podStartE2EDuration="2.553357732s" podCreationTimestamp="2025-10-14 13:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:40:06.539661549 +0000 UTC m=+2032.349950938" watchObservedRunningTime="2025-10-14 13:40:06.553357732 +0000 UTC m=+2032.363647071"
Oct 14 13:40:06.795305 master-1 kubenswrapper[4740]: I1014 13:40:06.795056 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-1"
Oct 14 13:40:06.825129 master-1 kubenswrapper[4740]: I1014 13:40:06.825064 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-1"
Oct 14 13:40:07.552893 master-1 kubenswrapper[4740]: I1014 13:40:07.552805 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-1"
Oct 14 13:40:08.945189 master-1 kubenswrapper[4740]: I1014 13:40:08.945062 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:40:08.946197 master-1 kubenswrapper[4740]: E1014 13:40:08.945658 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:40:09.055448 master-1 kubenswrapper[4740]: I1014 13:40:09.055339 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-1" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.180:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:40:09.055448 master-1 kubenswrapper[4740]: I1014 13:40:09.055383 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-1" podUID="034d010b-5277-4cbe-b908-94fef09db25d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.180:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:40:09.882086 master-1 kubenswrapper[4740]: I1014 13:40:09.881986 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1"
Oct 14 13:40:09.882440 master-1 kubenswrapper[4740]: I1014 13:40:09.882109 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-1"
Oct 14 13:40:14.881622 master-1 kubenswrapper[4740]: I1014 13:40:14.881431 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1"
Oct 14 13:40:14.881622 master-1 kubenswrapper[4740]: I1014 13:40:14.881558 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-1"
Oct 14 13:40:15.898462 master-1 kubenswrapper[4740]: I1014 13:40:15.898358 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="b311d350-8cd1-43cd-857d-d22df59cc9d4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:40:15.898462 master-1 kubenswrapper[4740]: I1014 13:40:15.898385 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-1" podUID="b311d350-8cd1-43cd-857d-d22df59cc9d4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.187:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:40:21.943560 master-1 kubenswrapper[4740]: I1014 13:40:21.943484 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:40:21.944716 master-1 kubenswrapper[4740]: E1014 13:40:21.943782 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:40:24.887472 master-1 kubenswrapper[4740]: I1014 13:40:24.887389 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1"
Oct 14 13:40:24.892096 master-1 kubenswrapper[4740]: I1014 13:40:24.892026 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-1"
Oct 14 13:40:24.895190 master-1 kubenswrapper[4740]: I1014 13:40:24.895134 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1"
Oct 14 13:40:25.722435 master-1 kubenswrapper[4740]: I1014 13:40:25.722368 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-1"
Oct 14 13:40:34.951980 master-1 kubenswrapper[4740]: I1014 13:40:34.951911 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:40:34.952724 master-1 kubenswrapper[4740]: E1014 13:40:34.952597 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:40:48.944607 master-1 kubenswrapper[4740]: I1014 13:40:48.944532 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:40:48.945476 master-1 kubenswrapper[4740]: E1014 13:40:48.944906 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:41:01.944985 master-1 kubenswrapper[4740]: I1014 13:41:01.944896 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:41:01.945593 master-1 kubenswrapper[4740]: E1014 13:41:01.945337 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:41:16.944368 master-1 kubenswrapper[4740]: I1014 13:41:16.944297 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:41:16.945692 master-1 kubenswrapper[4740]: E1014 13:41:16.944563 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:41:18.182144 master-1 kubenswrapper[4740]: I1014 13:41:18.182062 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-hz8zc"]
Oct 14 13:41:18.183484 master-1 kubenswrapper[4740]: I1014 13:41:18.183449 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.187178 master-1 kubenswrapper[4740]: I1014 13:41:18.187108 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map"
Oct 14 13:41:18.187357 master-1 kubenswrapper[4740]: I1014 13:41:18.187280 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts"
Oct 14 13:41:18.187476 master-1 kubenswrapper[4740]: I1014 13:41:18.187432 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data"
Oct 14 13:41:18.205606 master-1 kubenswrapper[4740]: I1014 13:41:18.205540 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-hz8zc"]
Oct 14 13:41:18.290525 master-1 kubenswrapper[4740]: I1014 13:41:18.290429 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c8693286-491f-4a3f-aff4-66a0e160cf32-hm-ports\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.290750 master-1 kubenswrapper[4740]: I1014 13:41:18.290623 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8693286-491f-4a3f-aff4-66a0e160cf32-scripts\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.290750 master-1 kubenswrapper[4740]: I1014 13:41:18.290679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8693286-491f-4a3f-aff4-66a0e160cf32-config-data\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.290750 master-1 kubenswrapper[4740]: I1014 13:41:18.290736 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c8693286-491f-4a3f-aff4-66a0e160cf32-config-data-merged\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.395341 master-1 kubenswrapper[4740]: I1014 13:41:18.395244 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8693286-491f-4a3f-aff4-66a0e160cf32-config-data\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.395692 master-1 kubenswrapper[4740]: I1014 13:41:18.395417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c8693286-491f-4a3f-aff4-66a0e160cf32-config-data-merged\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.395692 master-1 kubenswrapper[4740]: I1014 13:41:18.395673 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c8693286-491f-4a3f-aff4-66a0e160cf32-hm-ports\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.395888 master-1 kubenswrapper[4740]: I1014 13:41:18.395864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8693286-491f-4a3f-aff4-66a0e160cf32-scripts\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.396335 master-1 kubenswrapper[4740]: I1014 13:41:18.396215 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c8693286-491f-4a3f-aff4-66a0e160cf32-config-data-merged\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.397376 master-1 kubenswrapper[4740]: I1014 13:41:18.397348 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c8693286-491f-4a3f-aff4-66a0e160cf32-hm-ports\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.400356 master-1 kubenswrapper[4740]: I1014 13:41:18.400322 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8693286-491f-4a3f-aff4-66a0e160cf32-config-data\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.410427 master-1 kubenswrapper[4740]: I1014 13:41:18.410356 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8693286-491f-4a3f-aff4-66a0e160cf32-scripts\") pod \"octavia-rsyslog-hz8zc\" (UID: \"c8693286-491f-4a3f-aff4-66a0e160cf32\") " pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:18.499799 master-1 kubenswrapper[4740]: I1014 13:41:18.499724 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:19.989880 master-1 kubenswrapper[4740]: I1014 13:41:19.989810 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-hz8zc"]
Oct 14 13:41:20.353211 master-1 kubenswrapper[4740]: I1014 13:41:20.353131 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-hz8zc" event={"ID":"c8693286-491f-4a3f-aff4-66a0e160cf32","Type":"ContainerStarted","Data":"d98a8da22a0881bab882212c92fba3ea1bb14aafe75ec0a91e80e50f4ca1e693"}
Oct 14 13:41:26.850379 master-1 kubenswrapper[4740]: I1014 13:41:26.850183 4740 scope.go:117] "RemoveContainer" containerID="8c1f1d0eaa9bf84d9707b573632a457b76ed1bb1933e088a18e8ca6ccddf7b3d"
Oct 14 13:41:28.454315 master-1 kubenswrapper[4740]: I1014 13:41:28.454134 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-hz8zc" event={"ID":"c8693286-491f-4a3f-aff4-66a0e160cf32","Type":"ContainerStarted","Data":"12f96bf38519e3a4b9e769520c91532ddd6c4c49e29022b89baa58c266c02def"}
Oct 14 13:41:29.945519 master-1 kubenswrapper[4740]: I1014 13:41:29.945426 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:41:29.946530 master-1 kubenswrapper[4740]: E1014 13:41:29.945899 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:41:30.480166 master-1 kubenswrapper[4740]: I1014 13:41:30.480123 4740 generic.go:334] "Generic (PLEG): container finished" podID="c8693286-491f-4a3f-aff4-66a0e160cf32" containerID="12f96bf38519e3a4b9e769520c91532ddd6c4c49e29022b89baa58c266c02def" exitCode=0
Oct 14 13:41:30.480464 master-1 kubenswrapper[4740]: I1014 13:41:30.480241 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-hz8zc" event={"ID":"c8693286-491f-4a3f-aff4-66a0e160cf32","Type":"ContainerDied","Data":"12f96bf38519e3a4b9e769520c91532ddd6c4c49e29022b89baa58c266c02def"}
Oct 14 13:41:32.505135 master-1 kubenswrapper[4740]: I1014 13:41:32.505042 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-hz8zc" event={"ID":"c8693286-491f-4a3f-aff4-66a0e160cf32","Type":"ContainerStarted","Data":"46f9e9a4aebbd149b6c53c1ba18a92631783e648ab663337d2a0a3f11bb0785d"}
Oct 14 13:41:32.506007 master-1 kubenswrapper[4740]: I1014 13:41:32.505435 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:32.542437 master-1 kubenswrapper[4740]: I1014 13:41:32.542343 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-hz8zc" podStartSLOduration=3.152394348 podStartE2EDuration="14.542319581s" podCreationTimestamp="2025-10-14 13:41:18 +0000 UTC" firstStartedPulling="2025-10-14 13:41:20.0060689 +0000 UTC m=+2105.816358229" lastFinishedPulling="2025-10-14 13:41:31.395994133 +0000 UTC m=+2117.206283462" observedRunningTime="2025-10-14 13:41:32.538553091 +0000 UTC m=+2118.348842420" watchObservedRunningTime="2025-10-14 13:41:32.542319581 +0000 UTC m=+2118.352608930"
Oct 14 13:41:40.944746 master-1 kubenswrapper[4740]: I1014 13:41:40.944685 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:41:40.945481 master-1 kubenswrapper[4740]: E1014 13:41:40.945066 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:41:48.555627 master-1 kubenswrapper[4740]: I1014 13:41:48.555546 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-hz8zc"
Oct 14 13:41:55.943908 master-1 kubenswrapper[4740]: I1014 13:41:55.943845 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:41:55.944884 master-1 kubenswrapper[4740]: E1014 13:41:55.944167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:42:09.944849 master-1 kubenswrapper[4740]: I1014 13:42:09.944772 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:42:10.939008 master-1 kubenswrapper[4740]: I1014 13:42:10.938901 4740 generic.go:334] "Generic (PLEG): container finished" podID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerID="ac6094ae82f9f96d1aa43713994a633554205437390f72c9a9666008b12b485f" exitCode=1
Oct 14 13:42:10.939008 master-1 kubenswrapper[4740]: I1014 13:42:10.938988 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"ac6094ae82f9f96d1aa43713994a633554205437390f72c9a9666008b12b485f"}
Oct 14 13:42:10.939571 master-1 kubenswrapper[4740]: I1014 13:42:10.939078 4740 scope.go:117] "RemoveContainer" containerID="d310de74da320f97e6980f4751f3cbae4ae1471b49808507ce87441441d5a2f7"
Oct 14 13:42:10.940179 master-1 kubenswrapper[4740]: I1014 13:42:10.940138 4740 scope.go:117] "RemoveContainer" containerID="ac6094ae82f9f96d1aa43713994a633554205437390f72c9a9666008b12b485f"
Oct 14 13:42:10.940482 master-1 kubenswrapper[4740]: E1014 13:42:10.940449 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=barbican-db-sync pod=barbican-db-sync-hd9hz_openstack(3314e007-8945-436e-b5bb-7a7d9bf583ba)\"" pod="openstack/barbican-db-sync-hd9hz" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba"
Oct 14 13:42:11.991025 master-1 kubenswrapper[4740]: I1014 13:42:11.990922 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hd9hz"]
Oct 14 13:42:12.548221 master-1 kubenswrapper[4740]: I1014 13:42:12.548164 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hd9hz"
Oct 14 13:42:12.620280 master-1 kubenswrapper[4740]: I1014 13:42:12.619119 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-db-sync-config-data\") pod \"3314e007-8945-436e-b5bb-7a7d9bf583ba\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") "
Oct 14 13:42:12.620280 master-1 kubenswrapper[4740]: I1014 13:42:12.619276 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-combined-ca-bundle\") pod \"3314e007-8945-436e-b5bb-7a7d9bf583ba\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") "
Oct 14 13:42:12.620280 master-1 kubenswrapper[4740]: I1014 13:42:12.619345 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvgwc\" (UniqueName: \"kubernetes.io/projected/3314e007-8945-436e-b5bb-7a7d9bf583ba-kube-api-access-xvgwc\") pod \"3314e007-8945-436e-b5bb-7a7d9bf583ba\" (UID: \"3314e007-8945-436e-b5bb-7a7d9bf583ba\") "
Oct 14 13:42:12.621916 master-1 kubenswrapper[4740]: I1014 13:42:12.621853 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3314e007-8945-436e-b5bb-7a7d9bf583ba" (UID: "3314e007-8945-436e-b5bb-7a7d9bf583ba"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:42:12.622459 master-1 kubenswrapper[4740]: I1014 13:42:12.622401 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3314e007-8945-436e-b5bb-7a7d9bf583ba-kube-api-access-xvgwc" (OuterVolumeSpecName: "kube-api-access-xvgwc") pod "3314e007-8945-436e-b5bb-7a7d9bf583ba" (UID: "3314e007-8945-436e-b5bb-7a7d9bf583ba"). InnerVolumeSpecName "kube-api-access-xvgwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:42:12.644183 master-1 kubenswrapper[4740]: I1014 13:42:12.644100 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3314e007-8945-436e-b5bb-7a7d9bf583ba" (UID: "3314e007-8945-436e-b5bb-7a7d9bf583ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:42:12.721946 master-1 kubenswrapper[4740]: I1014 13:42:12.721776 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-db-sync-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:42:12.721946 master-1 kubenswrapper[4740]: I1014 13:42:12.721813 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3314e007-8945-436e-b5bb-7a7d9bf583ba-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:42:12.721946 master-1 kubenswrapper[4740]: I1014 13:42:12.721822 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvgwc\" (UniqueName: \"kubernetes.io/projected/3314e007-8945-436e-b5bb-7a7d9bf583ba-kube-api-access-xvgwc\") on node \"master-1\" DevicePath \"\""
Oct 14 13:42:12.965188 master-1 kubenswrapper[4740]: I1014 13:42:12.965113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hd9hz" event={"ID":"3314e007-8945-436e-b5bb-7a7d9bf583ba","Type":"ContainerDied","Data":"2a22f02c55e823f6fb9ccf03f0af27f1369cc17d4a93e7f315883fb235c19ed2"}
Oct 14 13:42:12.965188 master-1 kubenswrapper[4740]: I1014 13:42:12.965160 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hd9hz"
Oct 14 13:42:12.965491 master-1 kubenswrapper[4740]: I1014 13:42:12.965197 4740 scope.go:117] "RemoveContainer" containerID="ac6094ae82f9f96d1aa43713994a633554205437390f72c9a9666008b12b485f"
Oct 14 13:42:13.055120 master-1 kubenswrapper[4740]: I1014 13:42:13.055046 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hd9hz"]
Oct 14 13:42:13.062011 master-1 kubenswrapper[4740]: I1014 13:42:13.061948 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hd9hz"]
Oct 14 13:42:14.961062 master-1 kubenswrapper[4740]: I1014 13:42:14.960973 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" path="/var/lib/kubelet/pods/3314e007-8945-436e-b5bb-7a7d9bf583ba/volumes"
Oct 14 13:42:24.010284 master-1 kubenswrapper[4740]: I1014 13:42:24.009163 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-l566g"]
Oct 14 13:42:24.010284 master-1 kubenswrapper[4740]: E1014 13:42:24.010190 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.010284 master-1 kubenswrapper[4740]: I1014 13:42:24.010262 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: E1014 13:42:24.010310 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: I1014 13:42:24.010330 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: E1014 13:42:24.010358 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: I1014 13:42:24.010376 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: E1014 13:42:24.010406 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: I1014 13:42:24.010423 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: E1014 13:42:24.010460 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: I1014 13:42:24.010477 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: E1014 13:42:24.010497 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011175 master-1 kubenswrapper[4740]: I1014 13:42:24.010511 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011702 master-1 kubenswrapper[4740]: I1014 13:42:24.011491 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011702 master-1 kubenswrapper[4740]: I1014 13:42:24.011526 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011702 master-1 kubenswrapper[4740]: I1014 13:42:24.011547 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011702 master-1 kubenswrapper[4740]: I1014 13:42:24.011563 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011702 master-1 kubenswrapper[4740]: I1014 13:42:24.011591 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.011702 master-1 kubenswrapper[4740]: I1014 13:42:24.011616 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.012486 master-1 kubenswrapper[4740]: E1014 13:42:24.012450 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.012486 master-1 kubenswrapper[4740]: I1014 13:42:24.012484 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.012978 master-1 kubenswrapper[4740]: I1014 13:42:24.012930 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3314e007-8945-436e-b5bb-7a7d9bf583ba" containerName="barbican-db-sync"
Oct 14 13:42:24.015098 master-1 kubenswrapper[4740]: I1014 13:42:24.015051 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.017838 master-1 kubenswrapper[4740]: I1014 13:42:24.017775 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Oct 14 13:42:24.018154 master-1 kubenswrapper[4740]: I1014 13:42:24.018126 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Oct 14 13:42:24.019044 master-1 kubenswrapper[4740]: I1014 13:42:24.018991 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Oct 14 13:42:24.035427 master-1 kubenswrapper[4740]: I1014 13:42:24.033499 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-l566g"]
Oct 14 13:42:24.119291 master-1 kubenswrapper[4740]: I1014 13:42:24.118546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-combined-ca-bundle\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.119291 master-1 kubenswrapper[4740]: I1014 13:42:24.118642 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-amphora-certs\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.119291 master-1 kubenswrapper[4740]: I1014 13:42:24.118730 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-config-data\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.119291 master-1 kubenswrapper[4740]: I1014 13:42:24.119101 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/49489c0a-ee38-461c-80bc-fe9f81662644-hm-ports\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.119291 master-1 kubenswrapper[4740]: I1014 13:42:24.119197 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/49489c0a-ee38-461c-80bc-fe9f81662644-config-data-merged\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.119810 master-1 kubenswrapper[4740]: I1014 13:42:24.119468 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-scripts\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.221805 master-1 kubenswrapper[4740]: I1014 13:42:24.221576 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-config-data\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g"
Oct 14 13:42:24.221805 master-1 kubenswrapper[4740]: I1014 13:42:24.221703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/49489c0a-ee38-461c-80bc-fe9f81662644-hm-ports\") pod \"octavia-healthmanager-l566g\" (UID:
\"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.221805 master-1 kubenswrapper[4740]: I1014 13:42:24.221748 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/49489c0a-ee38-461c-80bc-fe9f81662644-config-data-merged\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.221805 master-1 kubenswrapper[4740]: I1014 13:42:24.221813 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-scripts\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.222449 master-1 kubenswrapper[4740]: I1014 13:42:24.221886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-combined-ca-bundle\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.222449 master-1 kubenswrapper[4740]: I1014 13:42:24.221914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-amphora-certs\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.223337 master-1 kubenswrapper[4740]: I1014 13:42:24.223272 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/49489c0a-ee38-461c-80bc-fe9f81662644-config-data-merged\") pod \"octavia-healthmanager-l566g\" (UID: 
\"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.224284 master-1 kubenswrapper[4740]: I1014 13:42:24.224218 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/49489c0a-ee38-461c-80bc-fe9f81662644-hm-ports\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.226500 master-1 kubenswrapper[4740]: I1014 13:42:24.226465 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-amphora-certs\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.227031 master-1 kubenswrapper[4740]: I1014 13:42:24.226982 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-scripts\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.229418 master-1 kubenswrapper[4740]: I1014 13:42:24.229272 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-config-data\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.229492 master-1 kubenswrapper[4740]: I1014 13:42:24.229379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49489c0a-ee38-461c-80bc-fe9f81662644-combined-ca-bundle\") pod \"octavia-healthmanager-l566g\" (UID: \"49489c0a-ee38-461c-80bc-fe9f81662644\") " 
pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:24.334143 master-1 kubenswrapper[4740]: I1014 13:42:24.333909 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:25.402141 master-1 kubenswrapper[4740]: I1014 13:42:25.402098 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-l566g"] Oct 14 13:42:25.800892 master-1 kubenswrapper[4740]: I1014 13:42:25.800795 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-lfkzl"] Oct 14 13:42:25.802610 master-1 kubenswrapper[4740]: I1014 13:42:25.802587 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.805975 master-1 kubenswrapper[4740]: I1014 13:42:25.805940 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Oct 14 13:42:25.806187 master-1 kubenswrapper[4740]: I1014 13:42:25.806162 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Oct 14 13:42:25.818635 master-1 kubenswrapper[4740]: I1014 13:42:25.818581 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-lfkzl"] Oct 14 13:42:25.856595 master-1 kubenswrapper[4740]: I1014 13:42:25.856537 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/034801be-9048-4c08-a4b5-8460be470b08-hm-ports\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.857036 master-1 kubenswrapper[4740]: I1014 13:42:25.857016 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/034801be-9048-4c08-a4b5-8460be470b08-config-data-merged\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.857173 master-1 kubenswrapper[4740]: I1014 13:42:25.857160 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-scripts\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.857332 master-1 kubenswrapper[4740]: I1014 13:42:25.857271 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-combined-ca-bundle\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.857552 master-1 kubenswrapper[4740]: I1014 13:42:25.857536 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-amphora-certs\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.857698 master-1 kubenswrapper[4740]: I1014 13:42:25.857685 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-config-data\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.959732 master-1 kubenswrapper[4740]: I1014 13:42:25.959677 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/034801be-9048-4c08-a4b5-8460be470b08-hm-ports\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.959953 master-1 kubenswrapper[4740]: I1014 13:42:25.959827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/034801be-9048-4c08-a4b5-8460be470b08-config-data-merged\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.959953 master-1 kubenswrapper[4740]: I1014 13:42:25.959873 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-scripts\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.959953 master-1 kubenswrapper[4740]: I1014 13:42:25.959893 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-combined-ca-bundle\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.959953 master-1 kubenswrapper[4740]: I1014 13:42:25.959910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-amphora-certs\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.959953 master-1 kubenswrapper[4740]: I1014 13:42:25.959931 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-config-data\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.960520 master-1 kubenswrapper[4740]: I1014 13:42:25.960352 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/034801be-9048-4c08-a4b5-8460be470b08-config-data-merged\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.963823 master-1 kubenswrapper[4740]: I1014 13:42:25.963795 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/034801be-9048-4c08-a4b5-8460be470b08-hm-ports\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.964125 master-1 kubenswrapper[4740]: I1014 13:42:25.964098 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-config-data\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.964358 master-1 kubenswrapper[4740]: I1014 13:42:25.964293 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-combined-ca-bundle\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.964452 master-1 kubenswrapper[4740]: I1014 13:42:25.964428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: 
\"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-amphora-certs\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:25.965246 master-1 kubenswrapper[4740]: I1014 13:42:25.965203 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/034801be-9048-4c08-a4b5-8460be470b08-scripts\") pod \"octavia-housekeeping-lfkzl\" (UID: \"034801be-9048-4c08-a4b5-8460be470b08\") " pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:26.125325 master-1 kubenswrapper[4740]: I1014 13:42:26.125264 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:26.130145 master-1 kubenswrapper[4740]: I1014 13:42:26.130060 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l566g" event={"ID":"49489c0a-ee38-461c-80bc-fe9f81662644","Type":"ContainerStarted","Data":"d493737f77c172351b8a7859a8c2d0538b7e8624b032159af16cf096d3ce3f44"} Oct 14 13:42:26.130271 master-1 kubenswrapper[4740]: I1014 13:42:26.130187 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l566g" event={"ID":"49489c0a-ee38-461c-80bc-fe9f81662644","Type":"ContainerStarted","Data":"305aa0e5e2c94d3d18a4503f48c81f16a0b13b54a237735fae5b9eca1aad6920"} Oct 14 13:42:26.734778 master-1 kubenswrapper[4740]: I1014 13:42:26.734673 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-lfkzl"] Oct 14 13:42:26.736405 master-1 kubenswrapper[4740]: W1014 13:42:26.735575 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod034801be_9048_4c08_a4b5_8460be470b08.slice/crio-ad0d32de1b3dbcc78b155fdefcfbf3e5ffdd790f2d7af4056d1613324d0c1226 WatchSource:0}: Error finding container 
ad0d32de1b3dbcc78b155fdefcfbf3e5ffdd790f2d7af4056d1613324d0c1226: Status 404 returned error can't find the container with id ad0d32de1b3dbcc78b155fdefcfbf3e5ffdd790f2d7af4056d1613324d0c1226 Oct 14 13:42:26.966532 master-1 kubenswrapper[4740]: I1014 13:42:26.966379 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-x4x4g"] Oct 14 13:42:26.969783 master-1 kubenswrapper[4740]: I1014 13:42:26.969743 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:26.973350 master-1 kubenswrapper[4740]: I1014 13:42:26.973200 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Oct 14 13:42:26.973622 master-1 kubenswrapper[4740]: I1014 13:42:26.973192 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Oct 14 13:42:26.988761 master-1 kubenswrapper[4740]: I1014 13:42:26.988704 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-amphora-certs\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:26.988909 master-1 kubenswrapper[4740]: I1014 13:42:26.988792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5fb68e00-21a5-4d56-8c41-c2110f3024d4-hm-ports\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:26.988909 master-1 kubenswrapper[4740]: I1014 13:42:26.988867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-scripts\") pod \"octavia-worker-x4x4g\" 
(UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:26.988909 master-1 kubenswrapper[4740]: I1014 13:42:26.988894 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fb68e00-21a5-4d56-8c41-c2110f3024d4-config-data-merged\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:26.989057 master-1 kubenswrapper[4740]: I1014 13:42:26.988954 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-config-data\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:26.989057 master-1 kubenswrapper[4740]: I1014 13:42:26.988978 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-combined-ca-bundle\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:26.989858 master-1 kubenswrapper[4740]: I1014 13:42:26.989802 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-x4x4g"] Oct 14 13:42:27.092802 master-1 kubenswrapper[4740]: I1014 13:42:27.092429 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-amphora-certs\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.092802 master-1 kubenswrapper[4740]: I1014 13:42:27.092553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5fb68e00-21a5-4d56-8c41-c2110f3024d4-hm-ports\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.096999 master-1 kubenswrapper[4740]: I1014 13:42:27.094209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5fb68e00-21a5-4d56-8c41-c2110f3024d4-hm-ports\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.096999 master-1 kubenswrapper[4740]: I1014 13:42:27.094284 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-scripts\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.096999 master-1 kubenswrapper[4740]: I1014 13:42:27.094918 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fb68e00-21a5-4d56-8c41-c2110f3024d4-config-data-merged\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.096999 master-1 kubenswrapper[4740]: I1014 13:42:27.094940 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5fb68e00-21a5-4d56-8c41-c2110f3024d4-config-data-merged\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.096999 master-1 kubenswrapper[4740]: I1014 13:42:27.095031 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-config-data\") pod 
\"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.096999 master-1 kubenswrapper[4740]: I1014 13:42:27.095062 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-combined-ca-bundle\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.098258 master-1 kubenswrapper[4740]: I1014 13:42:27.098202 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-amphora-certs\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.098814 master-1 kubenswrapper[4740]: I1014 13:42:27.098788 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-scripts\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.099851 master-1 kubenswrapper[4740]: I1014 13:42:27.099127 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-combined-ca-bundle\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.099851 master-1 kubenswrapper[4740]: I1014 13:42:27.099304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fb68e00-21a5-4d56-8c41-c2110f3024d4-config-data\") pod \"octavia-worker-x4x4g\" (UID: \"5fb68e00-21a5-4d56-8c41-c2110f3024d4\") " pod="openstack/octavia-worker-x4x4g" Oct 14 
13:42:27.140087 master-1 kubenswrapper[4740]: I1014 13:42:27.140033 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-lfkzl" event={"ID":"034801be-9048-4c08-a4b5-8460be470b08","Type":"ContainerStarted","Data":"ad0d32de1b3dbcc78b155fdefcfbf3e5ffdd790f2d7af4056d1613324d0c1226"} Oct 14 13:42:27.288778 master-1 kubenswrapper[4740]: I1014 13:42:27.288730 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:27.872102 master-1 kubenswrapper[4740]: I1014 13:42:27.872026 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-x4x4g"] Oct 14 13:42:28.169213 master-1 kubenswrapper[4740]: I1014 13:42:28.169137 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-x4x4g" event={"ID":"5fb68e00-21a5-4d56-8c41-c2110f3024d4","Type":"ContainerStarted","Data":"0e8597255f29c7b9318c7122687798cf6b321c5df6d14c8c87e5b50ee1968c78"} Oct 14 13:42:28.172213 master-1 kubenswrapper[4740]: I1014 13:42:28.172163 4740 generic.go:334] "Generic (PLEG): container finished" podID="49489c0a-ee38-461c-80bc-fe9f81662644" containerID="d493737f77c172351b8a7859a8c2d0538b7e8624b032159af16cf096d3ce3f44" exitCode=0 Oct 14 13:42:28.172213 master-1 kubenswrapper[4740]: I1014 13:42:28.172201 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l566g" event={"ID":"49489c0a-ee38-461c-80bc-fe9f81662644","Type":"ContainerDied","Data":"d493737f77c172351b8a7859a8c2d0538b7e8624b032159af16cf096d3ce3f44"} Oct 14 13:42:29.183349 master-1 kubenswrapper[4740]: I1014 13:42:29.183172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l566g" event={"ID":"49489c0a-ee38-461c-80bc-fe9f81662644","Type":"ContainerStarted","Data":"d522b827a64423dd101608ba5e62d0e19f3ff15beb556c498afe4a752dfaf374"} Oct 14 13:42:29.183851 master-1 kubenswrapper[4740]: I1014 13:42:29.183393 4740 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:29.188141 master-1 kubenswrapper[4740]: I1014 13:42:29.187193 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-lfkzl" event={"ID":"034801be-9048-4c08-a4b5-8460be470b08","Type":"ContainerStarted","Data":"10e6ed5d262257eb66c953d7839185dcda47e47deac7ec887b5dd358924ba79b"} Oct 14 13:42:29.228113 master-1 kubenswrapper[4740]: I1014 13:42:29.227768 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-l566g" podStartSLOduration=6.227739269 podStartE2EDuration="6.227739269s" podCreationTimestamp="2025-10-14 13:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:42:29.225988993 +0000 UTC m=+2175.036278342" watchObservedRunningTime="2025-10-14 13:42:29.227739269 +0000 UTC m=+2175.038028598" Oct 14 13:42:30.205147 master-1 kubenswrapper[4740]: I1014 13:42:30.205085 4740 generic.go:334] "Generic (PLEG): container finished" podID="034801be-9048-4c08-a4b5-8460be470b08" containerID="10e6ed5d262257eb66c953d7839185dcda47e47deac7ec887b5dd358924ba79b" exitCode=0 Oct 14 13:42:30.205835 master-1 kubenswrapper[4740]: I1014 13:42:30.205147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-lfkzl" event={"ID":"034801be-9048-4c08-a4b5-8460be470b08","Type":"ContainerDied","Data":"10e6ed5d262257eb66c953d7839185dcda47e47deac7ec887b5dd358924ba79b"} Oct 14 13:42:31.214784 master-1 kubenswrapper[4740]: I1014 13:42:31.214714 4740 generic.go:334] "Generic (PLEG): container finished" podID="5fb68e00-21a5-4d56-8c41-c2110f3024d4" containerID="c4ecc148cf19409e3495b925808c22cfe865a547d1d90b210291a76420706bb2" exitCode=0 Oct 14 13:42:31.215367 master-1 kubenswrapper[4740]: I1014 13:42:31.214817 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/octavia-worker-x4x4g" event={"ID":"5fb68e00-21a5-4d56-8c41-c2110f3024d4","Type":"ContainerDied","Data":"c4ecc148cf19409e3495b925808c22cfe865a547d1d90b210291a76420706bb2"} Oct 14 13:42:31.217923 master-1 kubenswrapper[4740]: I1014 13:42:31.217856 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-lfkzl" event={"ID":"034801be-9048-4c08-a4b5-8460be470b08","Type":"ContainerStarted","Data":"a39eb9e2a50f169c47cb395c44012e840a8bf63ec230d8a1879b4cd75b85c015"} Oct 14 13:42:31.218015 master-1 kubenswrapper[4740]: I1014 13:42:31.217984 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:31.280532 master-1 kubenswrapper[4740]: I1014 13:42:31.280431 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-lfkzl" podStartSLOduration=4.891512014 podStartE2EDuration="6.280407709s" podCreationTimestamp="2025-10-14 13:42:25 +0000 UTC" firstStartedPulling="2025-10-14 13:42:26.739940772 +0000 UTC m=+2172.550230101" lastFinishedPulling="2025-10-14 13:42:28.128836467 +0000 UTC m=+2173.939125796" observedRunningTime="2025-10-14 13:42:31.273163627 +0000 UTC m=+2177.083452966" watchObservedRunningTime="2025-10-14 13:42:31.280407709 +0000 UTC m=+2177.090697038" Oct 14 13:42:32.235477 master-1 kubenswrapper[4740]: I1014 13:42:32.235379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-x4x4g" event={"ID":"5fb68e00-21a5-4d56-8c41-c2110f3024d4","Type":"ContainerStarted","Data":"1696d54b1905ec8d3f88cd236297a1b65d74be502eaaa2f60bc2bbc11aaaa406"} Oct 14 13:42:32.236458 master-1 kubenswrapper[4740]: I1014 13:42:32.235644 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-x4x4g" Oct 14 13:42:32.278738 master-1 kubenswrapper[4740]: I1014 13:42:32.278622 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/octavia-worker-x4x4g" podStartSLOduration=4.59529215 podStartE2EDuration="6.278590943s" podCreationTimestamp="2025-10-14 13:42:26 +0000 UTC" firstStartedPulling="2025-10-14 13:42:28.097436666 +0000 UTC m=+2173.907725995" lastFinishedPulling="2025-10-14 13:42:29.780735459 +0000 UTC m=+2175.591024788" observedRunningTime="2025-10-14 13:42:32.261439689 +0000 UTC m=+2178.071729028" watchObservedRunningTime="2025-10-14 13:42:32.278590943 +0000 UTC m=+2178.088880312" Oct 14 13:42:39.373668 master-1 kubenswrapper[4740]: I1014 13:42:39.373602 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-l566g" Oct 14 13:42:41.169880 master-1 kubenswrapper[4740]: I1014 13:42:41.169743 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-lfkzl" Oct 14 13:42:42.324104 master-1 kubenswrapper[4740]: I1014 13:42:42.324052 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-x4x4g" Oct 14 13:45:00.181251 master-1 kubenswrapper[4740]: I1014 13:45:00.181165 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv"] Oct 14 13:45:00.182779 master-1 kubenswrapper[4740]: I1014 13:45:00.182678 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.185944 master-1 kubenswrapper[4740]: I1014 13:45:00.185829 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-t5gjh" Oct 14 13:45:00.186268 master-1 kubenswrapper[4740]: I1014 13:45:00.185877 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 13:45:00.195683 master-1 kubenswrapper[4740]: I1014 13:45:00.195624 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv"] Oct 14 13:45:00.288090 master-1 kubenswrapper[4740]: I1014 13:45:00.288025 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-config-volume\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.288090 master-1 kubenswrapper[4740]: I1014 13:45:00.288089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kssr6\" (UniqueName: \"kubernetes.io/projected/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-kube-api-access-kssr6\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.288437 master-1 kubenswrapper[4740]: I1014 13:45:00.288181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-secret-volume\") pod \"collect-profiles-29340825-szpzv\" (UID: 
\"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.390699 master-1 kubenswrapper[4740]: I1014 13:45:00.390542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-config-volume\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.390699 master-1 kubenswrapper[4740]: I1014 13:45:00.390664 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kssr6\" (UniqueName: \"kubernetes.io/projected/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-kube-api-access-kssr6\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.391005 master-1 kubenswrapper[4740]: I1014 13:45:00.390988 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-secret-volume\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.393013 master-1 kubenswrapper[4740]: I1014 13:45:00.392909 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-config-volume\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.394964 master-1 kubenswrapper[4740]: I1014 13:45:00.394878 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-secret-volume\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.424992 master-1 kubenswrapper[4740]: I1014 13:45:00.417950 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kssr6\" (UniqueName: \"kubernetes.io/projected/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-kube-api-access-kssr6\") pod \"collect-profiles-29340825-szpzv\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:00.509732 master-1 kubenswrapper[4740]: I1014 13:45:00.509648 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:01.048278 master-1 kubenswrapper[4740]: I1014 13:45:01.048212 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv"] Oct 14 13:45:01.057409 master-1 kubenswrapper[4740]: W1014 13:45:01.057326 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceec84fd_5472_4cbc_a11b_f041e5fd2d46.slice/crio-433d913a139b0e7cc2523cdc81d82fb8a5d0b67c0f48146e2c162005985722a8 WatchSource:0}: Error finding container 433d913a139b0e7cc2523cdc81d82fb8a5d0b67c0f48146e2c162005985722a8: Status 404 returned error can't find the container with id 433d913a139b0e7cc2523cdc81d82fb8a5d0b67c0f48146e2c162005985722a8 Oct 14 13:45:01.828618 master-1 kubenswrapper[4740]: I1014 13:45:01.828504 4740 generic.go:334] "Generic (PLEG): container finished" podID="ceec84fd-5472-4cbc-a11b-f041e5fd2d46" containerID="5692c84955c6122d7875a0105cca78594aa5bc23cd26c1069789f4556d4c680c" exitCode=0 Oct 14 13:45:01.828618 master-1 
kubenswrapper[4740]: I1014 13:45:01.828599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" event={"ID":"ceec84fd-5472-4cbc-a11b-f041e5fd2d46","Type":"ContainerDied","Data":"5692c84955c6122d7875a0105cca78594aa5bc23cd26c1069789f4556d4c680c"} Oct 14 13:45:01.829626 master-1 kubenswrapper[4740]: I1014 13:45:01.828651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" event={"ID":"ceec84fd-5472-4cbc-a11b-f041e5fd2d46","Type":"ContainerStarted","Data":"433d913a139b0e7cc2523cdc81d82fb8a5d0b67c0f48146e2c162005985722a8"} Oct 14 13:45:03.374537 master-1 kubenswrapper[4740]: I1014 13:45:03.374469 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:03.463622 master-1 kubenswrapper[4740]: I1014 13:45:03.463522 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-secret-volume\") pod \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " Oct 14 13:45:03.463622 master-1 kubenswrapper[4740]: I1014 13:45:03.463618 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kssr6\" (UniqueName: \"kubernetes.io/projected/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-kube-api-access-kssr6\") pod \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\" (UID: \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " Oct 14 13:45:03.464082 master-1 kubenswrapper[4740]: I1014 13:45:03.463787 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-config-volume\") pod \"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\" (UID: 
\"ceec84fd-5472-4cbc-a11b-f041e5fd2d46\") " Oct 14 13:45:03.465373 master-1 kubenswrapper[4740]: I1014 13:45:03.464604 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-config-volume" (OuterVolumeSpecName: "config-volume") pod "ceec84fd-5472-4cbc-a11b-f041e5fd2d46" (UID: "ceec84fd-5472-4cbc-a11b-f041e5fd2d46"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:45:03.467320 master-1 kubenswrapper[4740]: I1014 13:45:03.467225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ceec84fd-5472-4cbc-a11b-f041e5fd2d46" (UID: "ceec84fd-5472-4cbc-a11b-f041e5fd2d46"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:45:03.470714 master-1 kubenswrapper[4740]: I1014 13:45:03.470378 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-kube-api-access-kssr6" (OuterVolumeSpecName: "kube-api-access-kssr6") pod "ceec84fd-5472-4cbc-a11b-f041e5fd2d46" (UID: "ceec84fd-5472-4cbc-a11b-f041e5fd2d46"). InnerVolumeSpecName "kube-api-access-kssr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:45:03.567639 master-1 kubenswrapper[4740]: I1014 13:45:03.567445 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 14 13:45:03.567985 master-1 kubenswrapper[4740]: I1014 13:45:03.567959 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-secret-volume\") on node \"master-1\" DevicePath \"\"" Oct 14 13:45:03.568134 master-1 kubenswrapper[4740]: I1014 13:45:03.568108 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kssr6\" (UniqueName: \"kubernetes.io/projected/ceec84fd-5472-4cbc-a11b-f041e5fd2d46-kube-api-access-kssr6\") on node \"master-1\" DevicePath \"\"" Oct 14 13:45:03.868304 master-1 kubenswrapper[4740]: I1014 13:45:03.868051 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" event={"ID":"ceec84fd-5472-4cbc-a11b-f041e5fd2d46","Type":"ContainerDied","Data":"433d913a139b0e7cc2523cdc81d82fb8a5d0b67c0f48146e2c162005985722a8"} Oct 14 13:45:03.868304 master-1 kubenswrapper[4740]: I1014 13:45:03.868136 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433d913a139b0e7cc2523cdc81d82fb8a5d0b67c0f48146e2c162005985722a8" Oct 14 13:45:03.868304 master-1 kubenswrapper[4740]: I1014 13:45:03.868189 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv" Oct 14 13:45:28.138650 master-1 kubenswrapper[4740]: I1014 13:45:28.138551 4740 scope.go:117] "RemoveContainer" containerID="8dd96197bc75e254b98fcd8d332a2bca0a60437b93e392c3892305ff01c6c560" Oct 14 13:45:28.165898 master-1 kubenswrapper[4740]: I1014 13:45:28.165841 4740 scope.go:117] "RemoveContainer" containerID="bb6216313388627f07cc5f9d7f3fc804b44df3998a68a98115ab7d89403eecc4" Oct 14 13:45:35.093119 master-1 kubenswrapper[4740]: I1014 13:45:35.093009 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9jggw"] Oct 14 13:45:35.121539 master-1 kubenswrapper[4740]: I1014 13:45:35.121445 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9jggw"] Oct 14 13:45:36.963634 master-1 kubenswrapper[4740]: I1014 13:45:36.963542 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7697c8-a46f-40f0-ab6a-e02b46a7a832" path="/var/lib/kubelet/pods/cc7697c8-a46f-40f0-ab6a-e02b46a7a832/volumes" Oct 14 13:45:50.063749 master-1 kubenswrapper[4740]: I1014 13:45:50.063668 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pxfvm"] Oct 14 13:45:50.076109 master-1 kubenswrapper[4740]: I1014 13:45:50.076019 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pxfvm"] Oct 14 13:45:50.958797 master-1 kubenswrapper[4740]: I1014 13:45:50.958705 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ead27a-a1bb-4c69-8ecb-b982d0ca526b" path="/var/lib/kubelet/pods/14ead27a-a1bb-4c69-8ecb-b982d0ca526b/volumes" Oct 14 13:46:20.067664 master-1 kubenswrapper[4740]: I1014 13:46:20.067544 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-sx22g"] Oct 14 13:46:20.080009 master-1 kubenswrapper[4740]: I1014 13:46:20.079941 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-sx22g"] Oct 14 13:46:20.956951 master-1 kubenswrapper[4740]: I1014 13:46:20.956880 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de58ce43-1433-46b0-9f48-d8add8324fe5" path="/var/lib/kubelet/pods/de58ce43-1433-46b0-9f48-d8add8324fe5/volumes" Oct 14 13:46:23.060730 master-1 kubenswrapper[4740]: I1014 13:46:23.060644 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bc7jg"] Oct 14 13:46:23.067113 master-1 kubenswrapper[4740]: I1014 13:46:23.067050 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bc7jg"] Oct 14 13:46:24.052959 master-1 kubenswrapper[4740]: I1014 13:46:24.052879 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-z669w"] Oct 14 13:46:24.060436 master-1 kubenswrapper[4740]: I1014 13:46:24.060366 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-z669w"] Oct 14 13:46:24.964526 master-1 kubenswrapper[4740]: I1014 13:46:24.964465 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07974c63-665d-43bd-a568-286d26004725" path="/var/lib/kubelet/pods/07974c63-665d-43bd-a568-286d26004725/volumes" Oct 14 13:46:24.965082 master-1 kubenswrapper[4740]: I1014 13:46:24.965055 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28738a5a-94be-43a4-a55e-720365a4246b" path="/var/lib/kubelet/pods/28738a5a-94be-43a4-a55e-720365a4246b/volumes" Oct 14 13:46:28.222440 master-1 kubenswrapper[4740]: I1014 13:46:28.222312 4740 scope.go:117] "RemoveContainer" containerID="7baac481e755941c7afac5ebf22810288b8bee1a77644b515f33d648251687c1" Oct 14 13:46:28.273417 master-1 kubenswrapper[4740]: I1014 13:46:28.273298 4740 scope.go:117] "RemoveContainer" containerID="e2bcf28fa5173e32513fb968032043ccd5d5c391a650841af519d43ddea80c60" Oct 14 13:46:28.301690 master-1 kubenswrapper[4740]: I1014 13:46:28.301617 4740 scope.go:117] "RemoveContainer" 
containerID="17d5fd8df9c1cb34d0157c57c77ceaf1da15942e4119806c05cc8987c0cbf8a8" Oct 14 13:46:28.375067 master-1 kubenswrapper[4740]: I1014 13:46:28.375016 4740 scope.go:117] "RemoveContainer" containerID="63ea5e6a1add31aaff94a0cc365478d8470e2d693de0a8f0ad07a0baf4d57f47" Oct 14 13:46:28.401637 master-1 kubenswrapper[4740]: I1014 13:46:28.401575 4740 scope.go:117] "RemoveContainer" containerID="75e10b515b7197d9698e3991f1054c359ae157c60822b216a693d51035babca0" Oct 14 13:46:31.070900 master-1 kubenswrapper[4740]: I1014 13:46:31.070802 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-b28pf"] Oct 14 13:46:31.079463 master-1 kubenswrapper[4740]: I1014 13:46:31.079399 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-b28pf"] Oct 14 13:46:32.052203 master-1 kubenswrapper[4740]: I1014 13:46:32.052139 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46645-db-sync-bn4lj"] Oct 14 13:46:32.060084 master-1 kubenswrapper[4740]: I1014 13:46:32.059882 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-46645-db-sync-bn4lj"] Oct 14 13:46:32.959254 master-1 kubenswrapper[4740]: I1014 13:46:32.959089 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97045127-d8fb-49d6-8a81-816517ba472d" path="/var/lib/kubelet/pods/97045127-d8fb-49d6-8a81-816517ba472d/volumes" Oct 14 13:46:32.960412 master-1 kubenswrapper[4740]: I1014 13:46:32.960070 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f31b4a-3d7a-4274-befd-82f1bc035e07" path="/var/lib/kubelet/pods/e3f31b4a-3d7a-4274-befd-82f1bc035e07/volumes" Oct 14 13:47:28.536027 master-1 kubenswrapper[4740]: I1014 13:47:28.535882 4740 scope.go:117] "RemoveContainer" containerID="64f6ca22fec4006b855c5f2f150e55db7483bc312f32e0ebc6f1f255917c6710" Oct 14 13:47:28.623725 master-1 kubenswrapper[4740]: I1014 13:47:28.623617 4740 scope.go:117] "RemoveContainer" 
containerID="ddac44e8e70f5e96ad9e6a23164b8004361542efab2488d438c25a765cd435a2" Oct 14 13:48:03.071215 master-1 kubenswrapper[4740]: I1014 13:48:03.071099 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-vgnvk"] Oct 14 13:48:03.091424 master-1 kubenswrapper[4740]: I1014 13:48:03.091313 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-vgnvk"] Oct 14 13:48:04.966072 master-1 kubenswrapper[4740]: I1014 13:48:04.965983 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc0b252-d950-4ddd-8788-4fdc12cce585" path="/var/lib/kubelet/pods/abc0b252-d950-4ddd-8788-4fdc12cce585/volumes" Oct 14 13:48:13.062002 master-1 kubenswrapper[4740]: I1014 13:48:13.061898 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-d329-account-create-g8tsl"] Oct 14 13:48:13.074117 master-1 kubenswrapper[4740]: I1014 13:48:13.074019 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-d329-account-create-g8tsl"] Oct 14 13:48:14.957538 master-1 kubenswrapper[4740]: I1014 13:48:14.957447 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc4b770-906b-416c-9f2b-a9a4bfbea3b4" path="/var/lib/kubelet/pods/cfc4b770-906b-416c-9f2b-a9a4bfbea3b4/volumes" Oct 14 13:48:28.710225 master-1 kubenswrapper[4740]: I1014 13:48:28.710153 4740 scope.go:117] "RemoveContainer" containerID="70303063508b347fa472e66ad393f2aceb33c9134e23bec67dfa14ec1c5ce52c" Oct 14 13:48:28.751020 master-1 kubenswrapper[4740]: I1014 13:48:28.750986 4740 scope.go:117] "RemoveContainer" containerID="ac82c31ac2185f1368e4846fddb7cbe03a10a34702628bccf6254a9f9bcc044e" Oct 14 13:50:10.457095 master-1 kubenswrapper[4740]: I1014 13:50:10.457023 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-f85dff564-q5t6l" podUID="c5561ae4-eb1f-47ba-929b-c2b25b1efc8f" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" 
Oct 14 13:52:13.225540 master-1 kubenswrapper[4740]: I1014 13:52:13.225340 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dtcg4"] Oct 14 13:52:13.227786 master-1 kubenswrapper[4740]: E1014 13:52:13.226278 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceec84fd-5472-4cbc-a11b-f041e5fd2d46" containerName="collect-profiles" Oct 14 13:52:13.227786 master-1 kubenswrapper[4740]: I1014 13:52:13.226300 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceec84fd-5472-4cbc-a11b-f041e5fd2d46" containerName="collect-profiles" Oct 14 13:52:13.227786 master-1 kubenswrapper[4740]: I1014 13:52:13.226638 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceec84fd-5472-4cbc-a11b-f041e5fd2d46" containerName="collect-profiles" Oct 14 13:52:13.228756 master-1 kubenswrapper[4740]: I1014 13:52:13.228594 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.233852 master-1 kubenswrapper[4740]: I1014 13:52:13.233733 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 13:52:13.257607 master-1 kubenswrapper[4740]: I1014 13:52:13.257529 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dtcg4"] Oct 14 13:52:13.368007 master-1 kubenswrapper[4740]: I1014 13:52:13.367892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-combined-ca-bundle\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.368386 master-1 kubenswrapper[4740]: I1014 13:52:13.368159 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-db-sync-config-data\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.368386 master-1 kubenswrapper[4740]: I1014 13:52:13.368340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl5q8\" (UniqueName: \"kubernetes.io/projected/1a5b94c7-091a-4ada-bce0-85931aa7eb50-kube-api-access-wl5q8\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.469881 master-1 kubenswrapper[4740]: I1014 13:52:13.469843 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-db-sync-config-data\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.470171 master-1 kubenswrapper[4740]: I1014 13:52:13.470150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl5q8\" (UniqueName: \"kubernetes.io/projected/1a5b94c7-091a-4ada-bce0-85931aa7eb50-kube-api-access-wl5q8\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.470395 master-1 kubenswrapper[4740]: I1014 13:52:13.470377 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-combined-ca-bundle\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.473755 master-1 kubenswrapper[4740]: I1014 13:52:13.473737 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-combined-ca-bundle\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.475383 master-1 kubenswrapper[4740]: I1014 13:52:13.475336 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-db-sync-config-data\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.501604 master-1 kubenswrapper[4740]: I1014 13:52:13.501512 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl5q8\" (UniqueName: \"kubernetes.io/projected/1a5b94c7-091a-4ada-bce0-85931aa7eb50-kube-api-access-wl5q8\") pod \"barbican-db-sync-dtcg4\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") " pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:13.561117 master-1 kubenswrapper[4740]: I1014 13:52:13.561021 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dtcg4" Oct 14 13:52:14.050944 master-1 kubenswrapper[4740]: W1014 13:52:14.050851 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a5b94c7_091a_4ada_bce0_85931aa7eb50.slice/crio-4543ed220e5137388000c7c420db35b29fdcb3bb65707fad6ac8c0aff6ff32f1 WatchSource:0}: Error finding container 4543ed220e5137388000c7c420db35b29fdcb3bb65707fad6ac8c0aff6ff32f1: Status 404 returned error can't find the container with id 4543ed220e5137388000c7c420db35b29fdcb3bb65707fad6ac8c0aff6ff32f1 Oct 14 13:52:14.052908 master-1 kubenswrapper[4740]: I1014 13:52:14.052842 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dtcg4"] Oct 14 13:52:14.562055 master-1 kubenswrapper[4740]: I1014 13:52:14.561983 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerStarted","Data":"1b3abd8ca2eaad4931e3bc1bdb0165fdfd6c34e278163353c5e71c6dd4ec144d"} Oct 14 13:52:14.562055 master-1 kubenswrapper[4740]: I1014 13:52:14.562053 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerStarted","Data":"4543ed220e5137388000c7c420db35b29fdcb3bb65707fad6ac8c0aff6ff32f1"} Oct 14 13:52:14.592678 master-1 kubenswrapper[4740]: I1014 13:52:14.592575 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dtcg4" podStartSLOduration=1.5925539469999999 podStartE2EDuration="1.592553947s" podCreationTimestamp="2025-10-14 13:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:52:14.585552729 +0000 UTC m=+2760.395842058" watchObservedRunningTime="2025-10-14 13:52:14.592553947 +0000 UTC 
m=+2760.402843296" Oct 14 13:52:15.572814 master-1 kubenswrapper[4740]: I1014 13:52:15.572706 4740 generic.go:334] "Generic (PLEG): container finished" podID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerID="1b3abd8ca2eaad4931e3bc1bdb0165fdfd6c34e278163353c5e71c6dd4ec144d" exitCode=1 Oct 14 13:52:15.572814 master-1 kubenswrapper[4740]: I1014 13:52:15.572761 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"1b3abd8ca2eaad4931e3bc1bdb0165fdfd6c34e278163353c5e71c6dd4ec144d"} Oct 14 13:52:15.573996 master-1 kubenswrapper[4740]: I1014 13:52:15.573643 4740 scope.go:117] "RemoveContainer" containerID="1b3abd8ca2eaad4931e3bc1bdb0165fdfd6c34e278163353c5e71c6dd4ec144d" Oct 14 13:52:16.586524 master-1 kubenswrapper[4740]: I1014 13:52:16.586435 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerStarted","Data":"a4fee1b89e4f0c0d0f79f01aeea1c7c569d3662a634faa2b059f3821780e5014"} Oct 14 13:52:17.605533 master-1 kubenswrapper[4740]: I1014 13:52:17.605473 4740 generic.go:334] "Generic (PLEG): container finished" podID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerID="a4fee1b89e4f0c0d0f79f01aeea1c7c569d3662a634faa2b059f3821780e5014" exitCode=1 Oct 14 13:52:17.606669 master-1 kubenswrapper[4740]: I1014 13:52:17.605530 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"a4fee1b89e4f0c0d0f79f01aeea1c7c569d3662a634faa2b059f3821780e5014"} Oct 14 13:52:17.606669 master-1 kubenswrapper[4740]: I1014 13:52:17.605596 4740 scope.go:117] "RemoveContainer" containerID="1b3abd8ca2eaad4931e3bc1bdb0165fdfd6c34e278163353c5e71c6dd4ec144d" Oct 14 13:52:17.607186 master-1 kubenswrapper[4740]: I1014 13:52:17.607114 4740 scope.go:117] 
"RemoveContainer" containerID="a4fee1b89e4f0c0d0f79f01aeea1c7c569d3662a634faa2b059f3821780e5014" Oct 14 13:52:17.608077 master-1 kubenswrapper[4740]: E1014 13:52:17.607830 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" Oct 14 13:52:18.624797 master-1 kubenswrapper[4740]: I1014 13:52:18.624733 4740 scope.go:117] "RemoveContainer" containerID="a4fee1b89e4f0c0d0f79f01aeea1c7c569d3662a634faa2b059f3821780e5014" Oct 14 13:52:18.625518 master-1 kubenswrapper[4740]: E1014 13:52:18.625122 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" Oct 14 13:52:33.944429 master-1 kubenswrapper[4740]: I1014 13:52:33.944365 4740 scope.go:117] "RemoveContainer" containerID="a4fee1b89e4f0c0d0f79f01aeea1c7c569d3662a634faa2b059f3821780e5014" Oct 14 13:52:34.808496 master-1 kubenswrapper[4740]: I1014 13:52:34.808381 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerStarted","Data":"c5f389295a6d60529599d90d581f83ffcedb15bf7ed1ea0b5f3336b30f759e2a"} Oct 14 13:52:35.824864 master-1 kubenswrapper[4740]: I1014 13:52:35.824777 4740 generic.go:334] "Generic (PLEG): container finished" podID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerID="c5f389295a6d60529599d90d581f83ffcedb15bf7ed1ea0b5f3336b30f759e2a" exitCode=1 Oct 14 13:52:35.826025 master-1 kubenswrapper[4740]: I1014 
13:52:35.824900 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"c5f389295a6d60529599d90d581f83ffcedb15bf7ed1ea0b5f3336b30f759e2a"}
Oct 14 13:52:35.826025 master-1 kubenswrapper[4740]: I1014 13:52:35.825027 4740 scope.go:117] "RemoveContainer" containerID="a4fee1b89e4f0c0d0f79f01aeea1c7c569d3662a634faa2b059f3821780e5014"
Oct 14 13:52:35.826025 master-1 kubenswrapper[4740]: I1014 13:52:35.825932 4740 scope.go:117] "RemoveContainer" containerID="c5f389295a6d60529599d90d581f83ffcedb15bf7ed1ea0b5f3336b30f759e2a"
Oct 14 13:52:35.826738 master-1 kubenswrapper[4740]: E1014 13:52:35.826353 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:52:46.946789 master-1 kubenswrapper[4740]: I1014 13:52:46.946731 4740 scope.go:117] "RemoveContainer" containerID="c5f389295a6d60529599d90d581f83ffcedb15bf7ed1ea0b5f3336b30f759e2a"
Oct 14 13:52:46.949123 master-1 kubenswrapper[4740]: E1014 13:52:46.949016 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:52:57.944370 master-1 kubenswrapper[4740]: I1014 13:52:57.944306 4740 scope.go:117] "RemoveContainer" containerID="c5f389295a6d60529599d90d581f83ffcedb15bf7ed1ea0b5f3336b30f759e2a"
Oct 14 13:52:59.061555 master-1 kubenswrapper[4740]: E1014 13:52:59.061452 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a5b94c7_091a_4ada_bce0_85931aa7eb50.slice/crio-conmon-e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 13:52:59.062450 master-1 kubenswrapper[4740]: E1014 13:52:59.061617 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a5b94c7_091a_4ada_bce0_85931aa7eb50.slice/crio-conmon-e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a5b94c7_091a_4ada_bce0_85931aa7eb50.slice/crio-e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37.scope\": RecentStats: unable to find data in memory cache]"
Oct 14 13:52:59.100507 master-1 kubenswrapper[4740]: I1014 13:52:59.100451 4740 generic.go:334] "Generic (PLEG): container finished" podID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerID="e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37" exitCode=1
Oct 14 13:52:59.100777 master-1 kubenswrapper[4740]: I1014 13:52:59.100574 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37"}
Oct 14 13:52:59.100929 master-1 kubenswrapper[4740]: I1014 13:52:59.100908 4740 scope.go:117] "RemoveContainer" containerID="c5f389295a6d60529599d90d581f83ffcedb15bf7ed1ea0b5f3336b30f759e2a"
Oct 14 13:52:59.103791 master-1 kubenswrapper[4740]: I1014 13:52:59.103744 4740 scope.go:117] "RemoveContainer" containerID="e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37"
Oct 14 13:52:59.104378 master-1 kubenswrapper[4740]: E1014 13:52:59.104328 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:53:12.944953 master-1 kubenswrapper[4740]: I1014 13:53:12.944832 4740 scope.go:117] "RemoveContainer" containerID="e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37"
Oct 14 13:53:12.946144 master-1 kubenswrapper[4740]: E1014 13:53:12.945599 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:53:24.956153 master-1 kubenswrapper[4740]: I1014 13:53:24.956048 4740 scope.go:117] "RemoveContainer" containerID="e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37"
Oct 14 13:53:24.957363 master-1 kubenswrapper[4740]: E1014 13:53:24.956760 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:53:35.944416 master-1 kubenswrapper[4740]: I1014 13:53:35.944326 4740 scope.go:117] "RemoveContainer" containerID="e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37"
Oct 14 13:53:35.945662 master-1 kubenswrapper[4740]: E1014 13:53:35.945081 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:53:46.944122 master-1 kubenswrapper[4740]: I1014 13:53:46.944039 4740 scope.go:117] "RemoveContainer" containerID="e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37"
Oct 14 13:53:47.617316 master-1 kubenswrapper[4740]: I1014 13:53:47.617259 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerStarted","Data":"088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"}
Oct 14 13:53:48.632819 master-1 kubenswrapper[4740]: I1014 13:53:48.632703 4740 generic.go:334] "Generic (PLEG): container finished" podID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e" exitCode=1
Oct 14 13:53:48.632819 master-1 kubenswrapper[4740]: I1014 13:53:48.632783 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"}
Oct 14 13:53:48.632819 master-1 kubenswrapper[4740]: I1014 13:53:48.632848 4740 scope.go:117] "RemoveContainer" containerID="e5133c22898718f68bc781852f76d15334fd4842d1082b41588f189433408d37"
Oct 14 13:53:48.634396 master-1 kubenswrapper[4740]: I1014 13:53:48.634318 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:53:48.636296 master-1 kubenswrapper[4740]: E1014 13:53:48.634904 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:54:00.944157 master-1 kubenswrapper[4740]: I1014 13:54:00.944066 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:54:00.945594 master-1 kubenswrapper[4740]: E1014 13:54:00.944551 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:54:13.944934 master-1 kubenswrapper[4740]: I1014 13:54:13.944804 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:54:13.946402 master-1 kubenswrapper[4740]: E1014 13:54:13.945428 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:54:25.946272 master-1 kubenswrapper[4740]: I1014 13:54:25.946134 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:54:25.950769 master-1 kubenswrapper[4740]: E1014 13:54:25.950622 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:54:37.944106 master-1 kubenswrapper[4740]: I1014 13:54:37.944029 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:54:37.944888 master-1 kubenswrapper[4740]: E1014 13:54:37.944389 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:54:50.944821 master-1 kubenswrapper[4740]: I1014 13:54:50.944737 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:54:50.945671 master-1 kubenswrapper[4740]: E1014 13:54:50.945294 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:55:02.944965 master-1 kubenswrapper[4740]: I1014 13:55:02.944859 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:55:02.946351 master-1 kubenswrapper[4740]: E1014 13:55:02.945515 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:55:17.944116 master-1 kubenswrapper[4740]: I1014 13:55:17.944013 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:55:18.553012 master-1 kubenswrapper[4740]: I1014 13:55:18.552798 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerStarted","Data":"9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"}
Oct 14 13:55:19.567767 master-1 kubenswrapper[4740]: I1014 13:55:19.567657 4740 generic.go:334] "Generic (PLEG): container finished" podID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4" exitCode=1
Oct 14 13:55:19.567767 master-1 kubenswrapper[4740]: I1014 13:55:19.567737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"}
Oct 14 13:55:19.568507 master-1 kubenswrapper[4740]: I1014 13:55:19.567813 4740 scope.go:117] "RemoveContainer" containerID="088140fd0f2897f898e77362755a30ac165847da719742050074aafd1190060e"
Oct 14 13:55:19.569100 master-1 kubenswrapper[4740]: I1014 13:55:19.569051 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:55:19.570517 master-1 kubenswrapper[4740]: E1014 13:55:19.569795 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:55:32.945131 master-1 kubenswrapper[4740]: I1014 13:55:32.945015 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:55:32.946673 master-1 kubenswrapper[4740]: E1014 13:55:32.945533 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:55:44.950899 master-1 kubenswrapper[4740]: I1014 13:55:44.950836 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:55:44.951731 master-1 kubenswrapper[4740]: E1014 13:55:44.951077 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:55:58.944667 master-1 kubenswrapper[4740]: I1014 13:55:58.944551 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:55:58.945561 master-1 kubenswrapper[4740]: E1014 13:55:58.945095 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:56:11.943901 master-1 kubenswrapper[4740]: I1014 13:56:11.943799 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:56:11.945134 master-1 kubenswrapper[4740]: E1014 13:56:11.944648 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:56:25.945185 master-1 kubenswrapper[4740]: I1014 13:56:25.945114 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:56:25.946030 master-1 kubenswrapper[4740]: E1014 13:56:25.945702 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:56:40.944634 master-1 kubenswrapper[4740]: I1014 13:56:40.944523 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:56:40.946027 master-1 kubenswrapper[4740]: E1014 13:56:40.944960 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:56:53.944510 master-1 kubenswrapper[4740]: I1014 13:56:53.944418 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:56:53.945577 master-1 kubenswrapper[4740]: E1014 13:56:53.944794 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:57:08.944310 master-1 kubenswrapper[4740]: I1014 13:57:08.944212 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:57:08.944869 master-1 kubenswrapper[4740]: E1014 13:57:08.944671 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:57:20.944539 master-1 kubenswrapper[4740]: I1014 13:57:20.944475 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:57:20.945630 master-1 kubenswrapper[4740]: E1014 13:57:20.944776 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:57:31.944532 master-1 kubenswrapper[4740]: I1014 13:57:31.944406 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:57:31.945727 master-1 kubenswrapper[4740]: E1014 13:57:31.944894 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:57:44.956079 master-1 kubenswrapper[4740]: I1014 13:57:44.955993 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:57:44.957383 master-1 kubenswrapper[4740]: E1014 13:57:44.956584 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:57:58.943944 master-1 kubenswrapper[4740]: I1014 13:57:58.943869 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:57:58.945161 master-1 kubenswrapper[4740]: E1014 13:57:58.944687 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:58:09.944258 master-1 kubenswrapper[4740]: I1014 13:58:09.944179 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:58:10.366177 master-1 kubenswrapper[4740]: I1014 13:58:10.364628 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerStarted","Data":"077cd98588b6172859b35a5f292c63d3d6ac63af21dd407e683bfefc0205e4a1"}
Oct 14 13:58:11.380553 master-1 kubenswrapper[4740]: I1014 13:58:11.380480 4740 generic.go:334] "Generic (PLEG): container finished" podID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerID="077cd98588b6172859b35a5f292c63d3d6ac63af21dd407e683bfefc0205e4a1" exitCode=1
Oct 14 13:58:11.380553 master-1 kubenswrapper[4740]: I1014 13:58:11.380548 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"077cd98588b6172859b35a5f292c63d3d6ac63af21dd407e683bfefc0205e4a1"}
Oct 14 13:58:11.381495 master-1 kubenswrapper[4740]: I1014 13:58:11.380601 4740 scope.go:117] "RemoveContainer" containerID="9b64f4ed3a9ce2733417d6d6a12a714fd0a469220413608afe65b893ab8c9ee4"
Oct 14 13:58:11.382091 master-1 kubenswrapper[4740]: I1014 13:58:11.382017 4740 scope.go:117] "RemoveContainer" containerID="077cd98588b6172859b35a5f292c63d3d6ac63af21dd407e683bfefc0205e4a1"
Oct 14 13:58:11.383031 master-1 kubenswrapper[4740]: E1014 13:58:11.382960 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=barbican-db-sync pod=barbican-db-sync-dtcg4_openstack(1a5b94c7-091a-4ada-bce0-85931aa7eb50)\"" pod="openstack/barbican-db-sync-dtcg4" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50"
Oct 14 13:58:11.462329 master-1 kubenswrapper[4740]: I1014 13:58:11.462276 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dtcg4"]
Oct 14 13:58:12.890489 master-1 kubenswrapper[4740]: I1014 13:58:12.890452 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dtcg4"
Oct 14 13:58:13.001961 master-1 kubenswrapper[4740]: I1014 13:58:13.001874 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-db-sync-config-data\") pod \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") "
Oct 14 13:58:13.002475 master-1 kubenswrapper[4740]: I1014 13:58:13.002072 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-combined-ca-bundle\") pod \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") "
Oct 14 13:58:13.002475 master-1 kubenswrapper[4740]: I1014 13:58:13.002186 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl5q8\" (UniqueName: \"kubernetes.io/projected/1a5b94c7-091a-4ada-bce0-85931aa7eb50-kube-api-access-wl5q8\") pod \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\" (UID: \"1a5b94c7-091a-4ada-bce0-85931aa7eb50\") "
Oct 14 13:58:13.006960 master-1 kubenswrapper[4740]: I1014 13:58:13.006912 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5b94c7-091a-4ada-bce0-85931aa7eb50-kube-api-access-wl5q8" (OuterVolumeSpecName: "kube-api-access-wl5q8") pod "1a5b94c7-091a-4ada-bce0-85931aa7eb50" (UID: "1a5b94c7-091a-4ada-bce0-85931aa7eb50"). InnerVolumeSpecName "kube-api-access-wl5q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:58:13.010213 master-1 kubenswrapper[4740]: I1014 13:58:13.010150 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1a5b94c7-091a-4ada-bce0-85931aa7eb50" (UID: "1a5b94c7-091a-4ada-bce0-85931aa7eb50"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:58:13.047260 master-1 kubenswrapper[4740]: I1014 13:58:13.047135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a5b94c7-091a-4ada-bce0-85931aa7eb50" (UID: "1a5b94c7-091a-4ada-bce0-85931aa7eb50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:58:13.106224 master-1 kubenswrapper[4740]: I1014 13:58:13.106054 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-db-sync-config-data\") on node \"master-1\" DevicePath \"\""
Oct 14 13:58:13.106224 master-1 kubenswrapper[4740]: I1014 13:58:13.106124 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a5b94c7-091a-4ada-bce0-85931aa7eb50-combined-ca-bundle\") on node \"master-1\" DevicePath \"\""
Oct 14 13:58:13.106224 master-1 kubenswrapper[4740]: I1014 13:58:13.106150 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl5q8\" (UniqueName: \"kubernetes.io/projected/1a5b94c7-091a-4ada-bce0-85931aa7eb50-kube-api-access-wl5q8\") on node \"master-1\" DevicePath \"\""
Oct 14 13:58:13.407089 master-1 kubenswrapper[4740]: I1014 13:58:13.406676 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dtcg4" event={"ID":"1a5b94c7-091a-4ada-bce0-85931aa7eb50","Type":"ContainerDied","Data":"4543ed220e5137388000c7c420db35b29fdcb3bb65707fad6ac8c0aff6ff32f1"}
Oct 14 13:58:13.407633 master-1 kubenswrapper[4740]: I1014 13:58:13.406801 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dtcg4"
Oct 14 13:58:13.407633 master-1 kubenswrapper[4740]: I1014 13:58:13.407442 4740 scope.go:117] "RemoveContainer" containerID="077cd98588b6172859b35a5f292c63d3d6ac63af21dd407e683bfefc0205e4a1"
Oct 14 13:58:13.537557 master-1 kubenswrapper[4740]: I1014 13:58:13.537471 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dtcg4"]
Oct 14 13:58:13.615381 master-1 kubenswrapper[4740]: I1014 13:58:13.615308 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dtcg4"]
Oct 14 13:58:14.963348 master-1 kubenswrapper[4740]: I1014 13:58:14.960152 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" path="/var/lib/kubelet/pods/1a5b94c7-091a-4ada-bce0-85931aa7eb50/volumes"
Oct 14 14:00:00.174771 master-1 kubenswrapper[4740]: I1014 14:00:00.174705 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"]
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: E1014 14:00:00.175025 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175039 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: E1014 14:00:00.175051 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175057 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: E1014 14:00:00.175068 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175074 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: E1014 14:00:00.175084 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175089 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: E1014 14:00:00.175098 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175103 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175320 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175333 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175342 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175358 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.175366 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync"
Oct 14 14:00:00.176057 master-1 kubenswrapper[4740]: I1014 14:00:00.176065 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.179554 master-1 kubenswrapper[4740]: I1014 14:00:00.179503 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 14 14:00:00.179554 master-1 kubenswrapper[4740]: I1014 14:00:00.179533 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-t5gjh"
Oct 14 14:00:00.199354 master-1 kubenswrapper[4740]: I1014 14:00:00.199263 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"]
Oct 14 14:00:00.294729 master-1 kubenswrapper[4740]: I1014 14:00:00.294685 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30c1385d-7c82-43a7-8c25-469cb5366234-config-volume\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.295018 master-1 kubenswrapper[4740]: I1014 14:00:00.295001 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqk6\" (UniqueName: \"kubernetes.io/projected/30c1385d-7c82-43a7-8c25-469cb5366234-kube-api-access-gjqk6\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.295152 master-1 kubenswrapper[4740]: I1014 14:00:00.295139 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30c1385d-7c82-43a7-8c25-469cb5366234-secret-volume\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.399654 master-1 kubenswrapper[4740]: I1014 14:00:00.399579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30c1385d-7c82-43a7-8c25-469cb5366234-config-volume\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.400099 master-1 kubenswrapper[4740]: I1014 14:00:00.399743 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqk6\" (UniqueName: \"kubernetes.io/projected/30c1385d-7c82-43a7-8c25-469cb5366234-kube-api-access-gjqk6\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.400987 master-1 kubenswrapper[4740]: I1014 14:00:00.400930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30c1385d-7c82-43a7-8c25-469cb5366234-secret-volume\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.401816 master-1 kubenswrapper[4740]: I1014 14:00:00.401306 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30c1385d-7c82-43a7-8c25-469cb5366234-config-volume\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.422758 master-1 kubenswrapper[4740]: I1014 14:00:00.422694 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30c1385d-7c82-43a7-8c25-469cb5366234-secret-volume\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.430088 master-1 kubenswrapper[4740]: I1014 14:00:00.429987 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqk6\" (UniqueName: \"kubernetes.io/projected/30c1385d-7c82-43a7-8c25-469cb5366234-kube-api-access-gjqk6\") pod \"collect-profiles-29340840-w6v9t\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.498112 master-1 kubenswrapper[4740]: I1014 14:00:00.498058 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:00.960667 master-1 kubenswrapper[4740]: I1014 14:00:00.960612 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"]
Oct 14 14:00:00.964839 master-1 kubenswrapper[4740]: W1014 14:00:00.964211 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c1385d_7c82_43a7_8c25_469cb5366234.slice/crio-c90f02c79534b0ab7e0bf7c59832c29a5291def9d30db1bdc4398500695fcd06 WatchSource:0}: Error finding container c90f02c79534b0ab7e0bf7c59832c29a5291def9d30db1bdc4398500695fcd06: Status 404 returned error can't find the container with id c90f02c79534b0ab7e0bf7c59832c29a5291def9d30db1bdc4398500695fcd06
Oct 14 14:00:01.599059 master-1 kubenswrapper[4740]: I1014 14:00:01.598968 4740 generic.go:334] "Generic (PLEG): container finished" podID="30c1385d-7c82-43a7-8c25-469cb5366234" containerID="4b6746d13241c9be0811757273cd1931b0d698bce9e830e34395b2a25334f45c" exitCode=0
Oct 14 14:00:01.599932 master-1 kubenswrapper[4740]: I1014 14:00:01.599061 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t" event={"ID":"30c1385d-7c82-43a7-8c25-469cb5366234","Type":"ContainerDied","Data":"4b6746d13241c9be0811757273cd1931b0d698bce9e830e34395b2a25334f45c"}
Oct 14 14:00:01.599932 master-1 kubenswrapper[4740]: I1014 14:00:01.599165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t" event={"ID":"30c1385d-7c82-43a7-8c25-469cb5366234","Type":"ContainerStarted","Data":"c90f02c79534b0ab7e0bf7c59832c29a5291def9d30db1bdc4398500695fcd06"}
Oct 14 14:00:03.062937 master-1 kubenswrapper[4740]: I1014 14:00:03.062831 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t"
Oct 14 14:00:03.173115 master-1 kubenswrapper[4740]: I1014 14:00:03.173043 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30c1385d-7c82-43a7-8c25-469cb5366234-secret-volume\") pod \"30c1385d-7c82-43a7-8c25-469cb5366234\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") "
Oct 14 14:00:03.173388 master-1 kubenswrapper[4740]: I1014 14:00:03.173342 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjqk6\" (UniqueName: \"kubernetes.io/projected/30c1385d-7c82-43a7-8c25-469cb5366234-kube-api-access-gjqk6\") pod \"30c1385d-7c82-43a7-8c25-469cb5366234\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") "
Oct 14 14:00:03.173608 master-1 kubenswrapper[4740]: I1014 14:00:03.173573 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30c1385d-7c82-43a7-8c25-469cb5366234-config-volume\") pod \"30c1385d-7c82-43a7-8c25-469cb5366234\" (UID: \"30c1385d-7c82-43a7-8c25-469cb5366234\") "
Oct 14 14:00:03.174285 master-1 kubenswrapper[4740]: I1014 14:00:03.174219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c1385d-7c82-43a7-8c25-469cb5366234-config-volume" (OuterVolumeSpecName: "config-volume") pod "30c1385d-7c82-43a7-8c25-469cb5366234" (UID: "30c1385d-7c82-43a7-8c25-469cb5366234"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:00:03.174539 master-1 kubenswrapper[4740]: I1014 14:00:03.174484 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30c1385d-7c82-43a7-8c25-469cb5366234-config-volume\") on node \"master-1\" DevicePath \"\"" Oct 14 14:00:03.178614 master-1 kubenswrapper[4740]: I1014 14:00:03.178552 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c1385d-7c82-43a7-8c25-469cb5366234-kube-api-access-gjqk6" (OuterVolumeSpecName: "kube-api-access-gjqk6") pod "30c1385d-7c82-43a7-8c25-469cb5366234" (UID: "30c1385d-7c82-43a7-8c25-469cb5366234"). InnerVolumeSpecName "kube-api-access-gjqk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:00:03.179602 master-1 kubenswrapper[4740]: I1014 14:00:03.179525 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c1385d-7c82-43a7-8c25-469cb5366234-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "30c1385d-7c82-43a7-8c25-469cb5366234" (UID: "30c1385d-7c82-43a7-8c25-469cb5366234"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:00:03.276254 master-1 kubenswrapper[4740]: I1014 14:00:03.276196 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30c1385d-7c82-43a7-8c25-469cb5366234-secret-volume\") on node \"master-1\" DevicePath \"\"" Oct 14 14:00:03.276254 master-1 kubenswrapper[4740]: I1014 14:00:03.276254 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjqk6\" (UniqueName: \"kubernetes.io/projected/30c1385d-7c82-43a7-8c25-469cb5366234-kube-api-access-gjqk6\") on node \"master-1\" DevicePath \"\"" Oct 14 14:00:03.618080 master-1 kubenswrapper[4740]: I1014 14:00:03.617906 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t" event={"ID":"30c1385d-7c82-43a7-8c25-469cb5366234","Type":"ContainerDied","Data":"c90f02c79534b0ab7e0bf7c59832c29a5291def9d30db1bdc4398500695fcd06"} Oct 14 14:00:03.618080 master-1 kubenswrapper[4740]: I1014 14:00:03.617965 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90f02c79534b0ab7e0bf7c59832c29a5291def9d30db1bdc4398500695fcd06" Oct 14 14:00:03.618080 master-1 kubenswrapper[4740]: I1014 14:00:03.617994 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t" Oct 14 14:01:00.209641 master-1 kubenswrapper[4740]: I1014 14:01:00.209563 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29340841-4g4qq"] Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: E1014 14:01:00.209961 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync" Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: I1014 14:01:00.209976 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync" Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: E1014 14:01:00.209992 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync" Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: I1014 14:01:00.210000 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync" Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: E1014 14:01:00.210046 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c1385d-7c82-43a7-8c25-469cb5366234" containerName="collect-profiles" Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: I1014 14:01:00.210054 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c1385d-7c82-43a7-8c25-469cb5366234" containerName="collect-profiles" Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: I1014 14:01:00.210255 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c1385d-7c82-43a7-8c25-469cb5366234" containerName="collect-profiles" Oct 14 14:01:00.210337 master-1 kubenswrapper[4740]: I1014 14:01:00.210279 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync" Oct 14 14:01:00.210337 
master-1 kubenswrapper[4740]: I1014 14:01:00.210291 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b94c7-091a-4ada-bce0-85931aa7eb50" containerName="barbican-db-sync" Oct 14 14:01:00.211139 master-1 kubenswrapper[4740]: I1014 14:01:00.211083 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.229420 master-1 kubenswrapper[4740]: I1014 14:01:00.229259 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340841-4g4qq"] Oct 14 14:01:00.290022 master-1 kubenswrapper[4740]: I1014 14:01:00.289935 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-fernet-keys\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.290323 master-1 kubenswrapper[4740]: I1014 14:01:00.290128 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-config-data\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.290323 master-1 kubenswrapper[4740]: I1014 14:01:00.290186 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-combined-ca-bundle\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.290323 master-1 kubenswrapper[4740]: I1014 14:01:00.290256 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zdjgv\" (UniqueName: \"kubernetes.io/projected/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-kube-api-access-zdjgv\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.392640 master-1 kubenswrapper[4740]: I1014 14:01:00.392566 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-config-data\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.392640 master-1 kubenswrapper[4740]: I1014 14:01:00.392652 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-combined-ca-bundle\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.392997 master-1 kubenswrapper[4740]: I1014 14:01:00.392700 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjgv\" (UniqueName: \"kubernetes.io/projected/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-kube-api-access-zdjgv\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.392997 master-1 kubenswrapper[4740]: I1014 14:01:00.392761 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-fernet-keys\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.407397 master-1 kubenswrapper[4740]: I1014 14:01:00.407317 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-combined-ca-bundle\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.407786 master-1 kubenswrapper[4740]: I1014 14:01:00.407703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-fernet-keys\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.408593 master-1 kubenswrapper[4740]: I1014 14:01:00.408525 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-config-data\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.420386 master-1 kubenswrapper[4740]: I1014 14:01:00.420297 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjgv\" (UniqueName: \"kubernetes.io/projected/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-kube-api-access-zdjgv\") pod \"keystone-cron-29340841-4g4qq\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.529384 master-1 kubenswrapper[4740]: I1014 14:01:00.529305 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:00.992426 master-1 kubenswrapper[4740]: I1014 14:01:00.992332 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340841-4g4qq"] Oct 14 14:01:01.193013 master-1 kubenswrapper[4740]: I1014 14:01:01.192961 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-4g4qq" event={"ID":"28afb4e1-79e3-4131-ab1c-7ed92bf203d6","Type":"ContainerStarted","Data":"677bdf06a566f2259ea31a45e130d8587c3562464daaff2b4e09ad4252640820"} Oct 14 14:01:02.207487 master-1 kubenswrapper[4740]: I1014 14:01:02.207398 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-4g4qq" event={"ID":"28afb4e1-79e3-4131-ab1c-7ed92bf203d6","Type":"ContainerStarted","Data":"5c8c48b7ec6958d6bef7304658e6f0b2a4ee0ef05e41f2deef4d387cf3750a8b"} Oct 14 14:01:02.244488 master-1 kubenswrapper[4740]: I1014 14:01:02.244315 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29340841-4g4qq" podStartSLOduration=2.244283279 podStartE2EDuration="2.244283279s" podCreationTimestamp="2025-10-14 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:01:02.239332124 +0000 UTC m=+3288.049621463" watchObservedRunningTime="2025-10-14 14:01:02.244283279 +0000 UTC m=+3288.054572608" Oct 14 14:01:03.220895 master-1 kubenswrapper[4740]: I1014 14:01:03.220761 4740 generic.go:334] "Generic (PLEG): container finished" podID="28afb4e1-79e3-4131-ab1c-7ed92bf203d6" containerID="5c8c48b7ec6958d6bef7304658e6f0b2a4ee0ef05e41f2deef4d387cf3750a8b" exitCode=0 Oct 14 14:01:03.221662 master-1 kubenswrapper[4740]: I1014 14:01:03.220859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-4g4qq" 
event={"ID":"28afb4e1-79e3-4131-ab1c-7ed92bf203d6","Type":"ContainerDied","Data":"5c8c48b7ec6958d6bef7304658e6f0b2a4ee0ef05e41f2deef4d387cf3750a8b"} Oct 14 14:01:04.721086 master-1 kubenswrapper[4740]: I1014 14:01:04.721044 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:01:04.800123 master-1 kubenswrapper[4740]: I1014 14:01:04.800028 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-fernet-keys\") pod \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " Oct 14 14:01:04.800470 master-1 kubenswrapper[4740]: I1014 14:01:04.800177 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-config-data\") pod \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " Oct 14 14:01:04.800660 master-1 kubenswrapper[4740]: I1014 14:01:04.800577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdjgv\" (UniqueName: \"kubernetes.io/projected/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-kube-api-access-zdjgv\") pod \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " Oct 14 14:01:04.800660 master-1 kubenswrapper[4740]: I1014 14:01:04.800648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-combined-ca-bundle\") pod \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\" (UID: \"28afb4e1-79e3-4131-ab1c-7ed92bf203d6\") " Oct 14 14:01:04.805392 master-1 kubenswrapper[4740]: I1014 14:01:04.805290 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-kube-api-access-zdjgv" (OuterVolumeSpecName: "kube-api-access-zdjgv") pod "28afb4e1-79e3-4131-ab1c-7ed92bf203d6" (UID: "28afb4e1-79e3-4131-ab1c-7ed92bf203d6"). InnerVolumeSpecName "kube-api-access-zdjgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:01:04.806146 master-1 kubenswrapper[4740]: I1014 14:01:04.806071 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "28afb4e1-79e3-4131-ab1c-7ed92bf203d6" (UID: "28afb4e1-79e3-4131-ab1c-7ed92bf203d6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:01:04.830210 master-1 kubenswrapper[4740]: I1014 14:01:04.830104 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28afb4e1-79e3-4131-ab1c-7ed92bf203d6" (UID: "28afb4e1-79e3-4131-ab1c-7ed92bf203d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:01:04.866142 master-1 kubenswrapper[4740]: I1014 14:01:04.865976 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-config-data" (OuterVolumeSpecName: "config-data") pod "28afb4e1-79e3-4131-ab1c-7ed92bf203d6" (UID: "28afb4e1-79e3-4131-ab1c-7ed92bf203d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:01:04.904255 master-1 kubenswrapper[4740]: I1014 14:01:04.904145 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdjgv\" (UniqueName: \"kubernetes.io/projected/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-kube-api-access-zdjgv\") on node \"master-1\" DevicePath \"\"" Oct 14 14:01:04.904255 master-1 kubenswrapper[4740]: I1014 14:01:04.904224 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-combined-ca-bundle\") on node \"master-1\" DevicePath \"\"" Oct 14 14:01:04.904255 master-1 kubenswrapper[4740]: I1014 14:01:04.904255 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-fernet-keys\") on node \"master-1\" DevicePath \"\"" Oct 14 14:01:04.904530 master-1 kubenswrapper[4740]: I1014 14:01:04.904269 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28afb4e1-79e3-4131-ab1c-7ed92bf203d6-config-data\") on node \"master-1\" DevicePath \"\"" Oct 14 14:01:05.252918 master-1 kubenswrapper[4740]: I1014 14:01:05.252795 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-4g4qq" event={"ID":"28afb4e1-79e3-4131-ab1c-7ed92bf203d6","Type":"ContainerDied","Data":"677bdf06a566f2259ea31a45e130d8587c3562464daaff2b4e09ad4252640820"} Oct 14 14:01:05.252918 master-1 kubenswrapper[4740]: I1014 14:01:05.252894 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="677bdf06a566f2259ea31a45e130d8587c3562464daaff2b4e09ad4252640820" Oct 14 14:01:05.254084 master-1 kubenswrapper[4740]: I1014 14:01:05.253282 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29340841-4g4qq" Oct 14 14:04:13.167833 master-1 kubenswrapper[4740]: I1014 14:04:13.167691 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zqxwl/must-gather-w5zhx"] Oct 14 14:04:13.168591 master-1 kubenswrapper[4740]: E1014 14:04:13.168180 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28afb4e1-79e3-4131-ab1c-7ed92bf203d6" containerName="keystone-cron" Oct 14 14:04:13.168591 master-1 kubenswrapper[4740]: I1014 14:04:13.168197 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="28afb4e1-79e3-4131-ab1c-7ed92bf203d6" containerName="keystone-cron" Oct 14 14:04:13.168591 master-1 kubenswrapper[4740]: I1014 14:04:13.168515 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="28afb4e1-79e3-4131-ab1c-7ed92bf203d6" containerName="keystone-cron" Oct 14 14:04:13.169941 master-1 kubenswrapper[4740]: I1014 14:04:13.169903 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:13.172677 master-1 kubenswrapper[4740]: I1014 14:04:13.172646 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zqxwl"/"openshift-service-ca.crt" Oct 14 14:04:13.173204 master-1 kubenswrapper[4740]: I1014 14:04:13.173181 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zqxwl"/"kube-root-ca.crt" Oct 14 14:04:13.192854 master-1 kubenswrapper[4740]: I1014 14:04:13.192778 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zqxwl/must-gather-w5zhx"] Oct 14 14:04:13.223285 master-1 kubenswrapper[4740]: I1014 14:04:13.207584 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjdv\" (UniqueName: \"kubernetes.io/projected/57c6156e-c43d-454e-a0c7-87c95e28864c-kube-api-access-lcjdv\") pod \"must-gather-w5zhx\" (UID: \"57c6156e-c43d-454e-a0c7-87c95e28864c\") " pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:13.223285 master-1 kubenswrapper[4740]: I1014 14:04:13.207744 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/57c6156e-c43d-454e-a0c7-87c95e28864c-must-gather-output\") pod \"must-gather-w5zhx\" (UID: \"57c6156e-c43d-454e-a0c7-87c95e28864c\") " pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:13.310068 master-1 kubenswrapper[4740]: I1014 14:04:13.309982 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/57c6156e-c43d-454e-a0c7-87c95e28864c-must-gather-output\") pod \"must-gather-w5zhx\" (UID: \"57c6156e-c43d-454e-a0c7-87c95e28864c\") " pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:13.310338 master-1 kubenswrapper[4740]: I1014 14:04:13.310300 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcjdv\" (UniqueName: \"kubernetes.io/projected/57c6156e-c43d-454e-a0c7-87c95e28864c-kube-api-access-lcjdv\") pod \"must-gather-w5zhx\" (UID: \"57c6156e-c43d-454e-a0c7-87c95e28864c\") " pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:13.311944 master-1 kubenswrapper[4740]: I1014 14:04:13.310748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/57c6156e-c43d-454e-a0c7-87c95e28864c-must-gather-output\") pod \"must-gather-w5zhx\" (UID: \"57c6156e-c43d-454e-a0c7-87c95e28864c\") " pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:13.349256 master-1 kubenswrapper[4740]: I1014 14:04:13.349180 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcjdv\" (UniqueName: \"kubernetes.io/projected/57c6156e-c43d-454e-a0c7-87c95e28864c-kube-api-access-lcjdv\") pod \"must-gather-w5zhx\" (UID: \"57c6156e-c43d-454e-a0c7-87c95e28864c\") " pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:13.487319 master-1 kubenswrapper[4740]: I1014 14:04:13.487284 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zqxwl/must-gather-w5zhx" Oct 14 14:04:14.020668 master-1 kubenswrapper[4740]: I1014 14:04:14.020590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zqxwl/must-gather-w5zhx"] Oct 14 14:04:14.024005 master-1 kubenswrapper[4740]: I1014 14:04:14.023978 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 14:04:14.171875 master-1 kubenswrapper[4740]: I1014 14:04:14.171788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/must-gather-w5zhx" event={"ID":"57c6156e-c43d-454e-a0c7-87c95e28864c","Type":"ContainerStarted","Data":"0e034e5b8a4bf081b2774423e1a713a72e7792bf5712fd18df43408aea1387a5"} Oct 14 14:04:16.194690 master-1 kubenswrapper[4740]: I1014 14:04:16.194616 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/must-gather-w5zhx" event={"ID":"57c6156e-c43d-454e-a0c7-87c95e28864c","Type":"ContainerStarted","Data":"610a40fb1ba23d0e4f4cb9e5d5ecd97d4dfe17b897c4df4353c376bc81e46bb0"} Oct 14 14:04:16.195260 master-1 kubenswrapper[4740]: I1014 14:04:16.194699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/must-gather-w5zhx" event={"ID":"57c6156e-c43d-454e-a0c7-87c95e28864c","Type":"ContainerStarted","Data":"a06e7f115f512ec99446047535b07b2dea620f3c0f516757144c6bb0e34ed3d5"} Oct 14 14:04:16.230006 master-1 kubenswrapper[4740]: I1014 14:04:16.229853 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zqxwl/must-gather-w5zhx" podStartSLOduration=1.783968254 podStartE2EDuration="3.229837066s" podCreationTimestamp="2025-10-14 14:04:13 +0000 UTC" firstStartedPulling="2025-10-14 14:04:14.023935864 +0000 UTC m=+3479.834225183" lastFinishedPulling="2025-10-14 14:04:15.469804666 +0000 UTC m=+3481.280093995" observedRunningTime="2025-10-14 14:04:16.22049003 +0000 UTC m=+3482.030779379" 
watchObservedRunningTime="2025-10-14 14:04:16.229837066 +0000 UTC m=+3482.040126395" Oct 14 14:04:20.762900 master-1 kubenswrapper[4740]: I1014 14:04:20.762856 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-h8v5p_b24fdf4a-7fd9-4c72-a69a-4e49362f526d/nmstate-console-plugin/0.log" Oct 14 14:04:20.818133 master-1 kubenswrapper[4740]: I1014 14:04:20.818062 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lkd88_c9295a10-bbff-4e50-ae75-2fef346b2e6e/nmstate-handler/0.log" Oct 14 14:04:22.197469 master-1 kubenswrapper[4740]: I1014 14:04:22.197386 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/controller/0.log" Oct 14 14:04:23.021344 master-1 kubenswrapper[4740]: I1014 14:04:23.021298 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/frr/0.log" Oct 14 14:04:23.035613 master-1 kubenswrapper[4740]: I1014 14:04:23.035568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/reloader/0.log" Oct 14 14:04:23.052360 master-1 kubenswrapper[4740]: I1014 14:04:23.052312 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/frr-metrics/0.log" Oct 14 14:04:23.071031 master-1 kubenswrapper[4740]: I1014 14:04:23.070983 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/kube-rbac-proxy/0.log" Oct 14 14:04:23.085323 master-1 kubenswrapper[4740]: I1014 14:04:23.085273 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/kube-rbac-proxy-frr/0.log" Oct 14 14:04:23.099316 master-1 kubenswrapper[4740]: I1014 
14:04:23.099278 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/cp-frr-files/0.log"
Oct 14 14:04:23.112003 master-1 kubenswrapper[4740]: I1014 14:04:23.111967 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/cp-reloader/0.log"
Oct 14 14:04:23.131852 master-1 kubenswrapper[4740]: I1014 14:04:23.131813 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/cp-metrics/0.log"
Oct 14 14:04:23.153922 master-1 kubenswrapper[4740]: I1014 14:04:23.153868 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-guard-master-1_e4b81afc-7eb3-4303-91f8-593c130da282/guard/0.log"
Oct 14 14:04:23.844250 master-1 kubenswrapper[4740]: I1014 14:04:23.844029 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcdctl/0.log"
Oct 14 14:04:23.916942 master-1 kubenswrapper[4740]: I1014 14:04:23.916812 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd/0.log"
Oct 14 14:04:23.940795 master-1 kubenswrapper[4740]: I1014 14:04:23.940723 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-metrics/0.log"
Oct 14 14:04:23.974250 master-1 kubenswrapper[4740]: I1014 14:04:23.973201 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-readyz/0.log"
Oct 14 14:04:24.012858 master-1 kubenswrapper[4740]: I1014 14:04:24.012407 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-rev/0.log"
Oct 14 14:04:24.034452 master-1 kubenswrapper[4740]: I1014 14:04:24.034421 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/setup/0.log"
Oct 14 14:04:24.069970 master-1 kubenswrapper[4740]: I1014 14:04:24.069933 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-ensure-env-vars/0.log"
Oct 14 14:04:24.074621 master-1 kubenswrapper[4740]: I1014 14:04:24.074586 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-65687bc9c8-h4cd4_442bd7e6-9cc3-4dc0-8d51-6f04492f2b5c/oauth-openshift/0.log"
Oct 14 14:04:24.225401 master-1 kubenswrapper[4740]: I1014 14:04:24.225343 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-resources-copy/0.log"
Oct 14 14:04:24.838730 master-1 kubenswrapper[4740]: I1014 14:04:24.838664 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mkjj_59cd9872-e0ab-4acd-b8c8-1fa1fd61e318/speaker/0.log"
Oct 14 14:04:24.848834 master-1 kubenswrapper[4740]: I1014 14:04:24.848786 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mkjj_59cd9872-e0ab-4acd-b8c8-1fa1fd61e318/kube-rbac-proxy/0.log"
Oct 14 14:04:24.918981 master-1 kubenswrapper[4740]: I1014 14:04:24.918436 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-10-master-1_cb24e814-5147-4bab-a2ac-0fa7b97b5ecf/installer/0.log"
Oct 14 14:04:25.089704 master-1 kubenswrapper[4740]: I1014 14:04:25.089564 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_revision-pruner-10-master-1_0cf6b504-565c-4311-a44f-c7c9e6f03add/pruner/0.log"
Oct 14 14:04:25.220807 master-1 kubenswrapper[4740]: I1014 14:04:25.220763 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/0.log"
Oct 14 14:04:25.285268 master-1 kubenswrapper[4740]: I1014 14:04:25.284727 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-66df44bc95-gldlr_97b0a691-fe82-46b1-9f04-671aed7e10be/authentication-operator/1.log"
Oct 14 14:04:25.418753 master-1 kubenswrapper[4740]: I1014 14:04:25.412605 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"]
Oct 14 14:04:25.418753 master-1 kubenswrapper[4740]: I1014 14:04:25.414384 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.453022 master-1 kubenswrapper[4740]: I1014 14:04:25.452963 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"]
Oct 14 14:04:25.521209 master-1 kubenswrapper[4740]: I1014 14:04:25.521135 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-proc\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.521209 master-1 kubenswrapper[4740]: I1014 14:04:25.521190 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-lib-modules\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.521528 master-1 kubenswrapper[4740]: I1014 14:04:25.521296 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff45s\" (UniqueName: \"kubernetes.io/projected/36ae30be-68f0-43f1-a36a-85769023082c-kube-api-access-ff45s\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.521528 master-1 kubenswrapper[4740]: I1014 14:04:25.521331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-sys\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.521528 master-1 kubenswrapper[4740]: I1014 14:04:25.521372 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-podres\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.574251 master-1 kubenswrapper[4740]: I1014 14:04:25.570862 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zqxwl/master-1-debug-x7c8l"]
Oct 14 14:04:25.574251 master-1 kubenswrapper[4740]: I1014 14:04:25.571986 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:25.624765 master-1 kubenswrapper[4740]: I1014 14:04:25.624676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b87839b6-543d-47e5-8994-7898b8ebec3c-host\") pod \"master-1-debug-x7c8l\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") " pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:25.624765 master-1 kubenswrapper[4740]: I1014 14:04:25.624753 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-proc\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.624765 master-1 kubenswrapper[4740]: I1014 14:04:25.624770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-lib-modules\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.625191 master-1 kubenswrapper[4740]: I1014 14:04:25.624819 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4g5\" (UniqueName: \"kubernetes.io/projected/b87839b6-543d-47e5-8994-7898b8ebec3c-kube-api-access-dx4g5\") pod \"master-1-debug-x7c8l\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") " pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:25.625191 master-1 kubenswrapper[4740]: I1014 14:04:25.624874 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff45s\" (UniqueName: \"kubernetes.io/projected/36ae30be-68f0-43f1-a36a-85769023082c-kube-api-access-ff45s\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.625191 master-1 kubenswrapper[4740]: I1014 14:04:25.624906 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-sys\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.625191 master-1 kubenswrapper[4740]: I1014 14:04:25.624944 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-podres\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.625191 master-1 kubenswrapper[4740]: I1014 14:04:25.625092 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-podres\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.625191 master-1 kubenswrapper[4740]: I1014 14:04:25.625146 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-proc\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.625722 master-1 kubenswrapper[4740]: I1014 14:04:25.625222 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-lib-modules\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.625722 master-1 kubenswrapper[4740]: I1014 14:04:25.625547 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/36ae30be-68f0-43f1-a36a-85769023082c-sys\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.647871 master-1 kubenswrapper[4740]: I1014 14:04:25.647821 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff45s\" (UniqueName: \"kubernetes.io/projected/36ae30be-68f0-43f1-a36a-85769023082c-kube-api-access-ff45s\") pod \"perf-node-gather-daemonset-zdpwq\" (UID: \"36ae30be-68f0-43f1-a36a-85769023082c\") " pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.726856 master-1 kubenswrapper[4740]: I1014 14:04:25.726720 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b87839b6-543d-47e5-8994-7898b8ebec3c-host\") pod \"master-1-debug-x7c8l\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") " pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:25.726856 master-1 kubenswrapper[4740]: I1014 14:04:25.726810 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4g5\" (UniqueName: \"kubernetes.io/projected/b87839b6-543d-47e5-8994-7898b8ebec3c-kube-api-access-dx4g5\") pod \"master-1-debug-x7c8l\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") " pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:25.727275 master-1 kubenswrapper[4740]: I1014 14:04:25.727185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b87839b6-543d-47e5-8994-7898b8ebec3c-host\") pod \"master-1-debug-x7c8l\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") " pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:25.751184 master-1 kubenswrapper[4740]: I1014 14:04:25.751131 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4g5\" (UniqueName: \"kubernetes.io/projected/b87839b6-543d-47e5-8994-7898b8ebec3c-kube-api-access-dx4g5\") pod \"master-1-debug-x7c8l\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") " pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:25.761140 master-1 kubenswrapper[4740]: I1014 14:04:25.761089 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:25.911330 master-1 kubenswrapper[4740]: I1014 14:04:25.903851 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:26.311558 master-1 kubenswrapper[4740]: I1014 14:04:26.311226 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l" event={"ID":"b87839b6-543d-47e5-8994-7898b8ebec3c","Type":"ContainerStarted","Data":"b0190bfd95c10f661f86930c79e04c46f159c1ca93c4985a6747a6c3b6f6f736"}
Oct 14 14:04:26.381954 master-1 kubenswrapper[4740]: W1014 14:04:26.381886 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod36ae30be_68f0_43f1_a36a_85769023082c.slice/crio-8367f7d46eb5756dd4040837b09729bf7b398ee6ba53071cc5eeed683e325286 WatchSource:0}: Error finding container 8367f7d46eb5756dd4040837b09729bf7b398ee6ba53071cc5eeed683e325286: Status 404 returned error can't find the container with id 8367f7d46eb5756dd4040837b09729bf7b398ee6ba53071cc5eeed683e325286
Oct 14 14:04:26.384065 master-1 kubenswrapper[4740]: I1014 14:04:26.383664 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"]
Oct 14 14:04:26.643776 master-1 kubenswrapper[4740]: I1014 14:04:26.643646 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-mzrkb_ebb13eb5-2870-4a31-a2b7-1a4e3b02bb67/assisted-installer-controller/0.log"
Oct 14 14:04:26.673259 master-1 kubenswrapper[4740]: I1014 14:04:26.668010 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-xf924_b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28/router/3.log"
Oct 14 14:04:26.674176 master-1 kubenswrapper[4740]: I1014 14:04:26.673596 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5ddb89f76-xf924_b1498c7d-1e0e-4a99-a0a0-bf6e05c7fd28/router/2.log"
Oct 14 14:04:27.327785 master-1 kubenswrapper[4740]: I1014 14:04:27.327726 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq" event={"ID":"36ae30be-68f0-43f1-a36a-85769023082c","Type":"ContainerStarted","Data":"e46e9ad7152bccbaef19bfb637287d9496992e2fff322fe62fb53951028eea1f"}
Oct 14 14:04:27.327785 master-1 kubenswrapper[4740]: I1014 14:04:27.327780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq" event={"ID":"36ae30be-68f0-43f1-a36a-85769023082c","Type":"ContainerStarted","Data":"8367f7d46eb5756dd4040837b09729bf7b398ee6ba53071cc5eeed683e325286"}
Oct 14 14:04:27.328357 master-1 kubenswrapper[4740]: I1014 14:04:27.327882 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:27.351421 master-1 kubenswrapper[4740]: I1014 14:04:27.351320 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq" podStartSLOduration=2.351278228 podStartE2EDuration="2.351278228s" podCreationTimestamp="2025-10-14 14:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:04:27.349213271 +0000 UTC m=+3493.159502610" watchObservedRunningTime="2025-10-14 14:04:27.351278228 +0000 UTC m=+3493.161567557"
Oct 14 14:04:27.954597 master-1 kubenswrapper[4740]: I1014 14:04:27.954554 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b6784d654-g299n_b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd/oauth-apiserver/0.log"
Oct 14 14:04:27.980662 master-1 kubenswrapper[4740]: I1014 14:04:27.980614 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b6784d654-g299n_b8edbc3a-5f27-44fb-bb3a-d35557ffc3bd/fix-audit-permissions/0.log"
Oct 14 14:04:28.778639 master-1 kubenswrapper[4740]: I1014 14:04:28.777867 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-nmpfk_ab511c1d-28e3-448a-86ec-cea21871fd26/kube-rbac-proxy/0.log"
Oct 14 14:04:28.829175 master-1 kubenswrapper[4740]: I1014 14:04:28.829145 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-7ff449c7c5-nmpfk_ab511c1d-28e3-448a-86ec-cea21871fd26/cluster-autoscaler-operator/0.log"
Oct 14 14:04:28.850425 master-1 kubenswrapper[4740]: I1014 14:04:28.850397 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-kcckh_bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1/cluster-baremetal-operator/0.log"
Oct 14 14:04:28.880158 master-1 kubenswrapper[4740]: I1014 14:04:28.880122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6c8fbf4498-kcckh_bb37dd4b-6e1f-4069-93f7-77e4d7bc27f1/baremetal-kube-rbac-proxy/0.log"
Oct 14 14:04:28.909450 master-1 kubenswrapper[4740]: I1014 14:04:28.909385 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-84f9cbd5d9-n87md_a4ab71e1-9b1f-42ee-8abb-8f998e3cae74/control-plane-machine-set-operator/0.log"
Oct 14 14:04:28.937557 master-1 kubenswrapper[4740]: I1014 14:04:28.937513 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-s66vj_b51ef0bc-8b0e-4fab-b101-660ed408924c/kube-rbac-proxy/0.log"
Oct 14 14:04:28.971465 master-1 kubenswrapper[4740]: I1014 14:04:28.971423 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-9dbb96f7-s66vj_b51ef0bc-8b0e-4fab-b101-660ed408924c/machine-api-operator/0.log"
Oct 14 14:04:31.648109 master-1 kubenswrapper[4740]: I1014 14:04:31.648056 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-5cf49b6487-4cf2d_1fa31cdd-e051-4987-a1a2-814fc7445e6b/kube-rbac-proxy/0.log"
Oct 14 14:04:31.932703 master-1 kubenswrapper[4740]: I1014 14:04:31.932554 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-5cf49b6487-4cf2d_1fa31cdd-e051-4987-a1a2-814fc7445e6b/cloud-credential-operator/0.log"
Oct 14 14:04:33.955393 master-1 kubenswrapper[4740]: I1014 14:04:33.955351 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-55957b47d5-vtkr6_f8b5ead9-7212-4a2f-8105-92d1c5384308/openshift-config-operator/1.log"
Oct 14 14:04:33.958987 master-1 kubenswrapper[4740]: I1014 14:04:33.958949 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-55957b47d5-vtkr6_f8b5ead9-7212-4a2f-8105-92d1c5384308/openshift-config-operator/0.log"
Oct 14 14:04:33.993978 master-1 kubenswrapper[4740]: I1014 14:04:33.993923 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-55957b47d5-vtkr6_f8b5ead9-7212-4a2f-8105-92d1c5384308/openshift-api/0.log"
Oct 14 14:04:35.796034 master-1 kubenswrapper[4740]: I1014 14:04:35.795817 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-zqxwl/perf-node-gather-daemonset-zdpwq"
Oct 14 14:04:38.423525 master-1 kubenswrapper[4740]: I1014 14:04:38.423413 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5958979c8-p9l2s_5fd95fb9-90cf-410f-9984-a31bfe8a5f76/console/0.log"
Oct 14 14:04:38.444729 master-1 kubenswrapper[4740]: I1014 14:04:38.444672 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l" event={"ID":"b87839b6-543d-47e5-8994-7898b8ebec3c","Type":"ContainerStarted","Data":"5b23e0e95a6d5b5cea3721600fd67f1cea3b5c7923a30ce36adb520dad37d53d"}
Oct 14 14:04:38.466105 master-1 kubenswrapper[4740]: I1014 14:04:38.465822 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l" podStartSLOduration=1.6639992700000001 podStartE2EDuration="13.465804608s" podCreationTimestamp="2025-10-14 14:04:25 +0000 UTC" firstStartedPulling="2025-10-14 14:04:25.961666063 +0000 UTC m=+3491.771955392" lastFinishedPulling="2025-10-14 14:04:37.763471401 +0000 UTC m=+3503.573760730" observedRunningTime="2025-10-14 14:04:38.465317044 +0000 UTC m=+3504.275606373" watchObservedRunningTime="2025-10-14 14:04:38.465804608 +0000 UTC m=+3504.276093937"
Oct 14 14:04:38.466547 master-1 kubenswrapper[4740]: I1014 14:04:38.466521 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-65bb9777fc-bm4pw_a32f08cc-7db7-455b-b904-e74aef3a165a/download-server/0.log"
Oct 14 14:04:39.531202 master-1 kubenswrapper[4740]: I1014 14:04:39.531155 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-56d4b95494-7ff2l_016573fd-7804-461e-83d7-1c019298f7c6/cluster-storage-operator/1.log"
Oct 14 14:04:39.539631 master-1 kubenswrapper[4740]: I1014 14:04:39.539594 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-56d4b95494-7ff2l_016573fd-7804-461e-83d7-1c019298f7c6/cluster-storage-operator/0.log"
Oct 14 14:04:39.563557 master-1 kubenswrapper[4740]: I1014 14:04:39.563510 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-ddd7d64cd-5s4kt_534fcd65-38f8-4d39-b4de-d7b2819318c7/snapshot-controller/0.log"
Oct 14 14:04:39.613881 master-1 kubenswrapper[4740]: I1014 14:04:39.613828 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7ff96dd767-9htmf_db9c19df-41e6-4216-829f-dd2975ff5108/csi-snapshot-controller-operator/0.log"
Oct 14 14:04:40.308308 master-1 kubenswrapper[4740]: I1014 14:04:40.308162 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-nh2qc_910af03d-893a-443d-b6ed-fe21c26951f4/dns-operator/0.log"
Oct 14 14:04:40.338456 master-1 kubenswrapper[4740]: I1014 14:04:40.338398 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-7769d9677-nh2qc_910af03d-893a-443d-b6ed-fe21c26951f4/kube-rbac-proxy/0.log"
Oct 14 14:04:41.229622 master-1 kubenswrapper[4740]: I1014 14:04:41.229526 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-zbv7v_f553d2c5-b9fb-49b5-baac-00d3384d6478/dns/0.log"
Oct 14 14:04:41.256883 master-1 kubenswrapper[4740]: I1014 14:04:41.256836 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-zbv7v_f553d2c5-b9fb-49b5-baac-00d3384d6478/kube-rbac-proxy/0.log"
Oct 14 14:04:41.356885 master-1 kubenswrapper[4740]: I1014 14:04:41.356844 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-lhshc_dc3c6b11-2798-41ca-8a29-2f4c99b0fa68/dns-node-resolver/0.log"
Oct 14 14:04:42.136675 master-1 kubenswrapper[4740]: I1014 14:04:42.136611 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-dtp9l_2a2b886b-005d-4d02-a231-ddacf42775ea/etcd-operator/0.log"
Oct 14 14:04:42.177880 master-1 kubenswrapper[4740]: I1014 14:04:42.177791 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-6bddf7d79-dtp9l_2a2b886b-005d-4d02-a231-ddacf42775ea/etcd-operator/1.log"
Oct 14 14:04:42.880798 master-1 kubenswrapper[4740]: I1014 14:04:42.880735 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-guard-master-1_e4b81afc-7eb3-4303-91f8-593c130da282/guard/0.log"
Oct 14 14:04:43.536820 master-1 kubenswrapper[4740]: I1014 14:04:43.536752 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcdctl/0.log"
Oct 14 14:04:43.652400 master-1 kubenswrapper[4740]: I1014 14:04:43.652307 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd/0.log"
Oct 14 14:04:43.679185 master-1 kubenswrapper[4740]: I1014 14:04:43.679087 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-metrics/0.log"
Oct 14 14:04:43.699947 master-1 kubenswrapper[4740]: I1014 14:04:43.699674 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-readyz/0.log"
Oct 14 14:04:43.730076 master-1 kubenswrapper[4740]: I1014 14:04:43.730027 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-rev/0.log"
Oct 14 14:04:43.756497 master-1 kubenswrapper[4740]: I1014 14:04:43.756450 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/setup/0.log"
Oct 14 14:04:43.789499 master-1 kubenswrapper[4740]: I1014 14:04:43.789363 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-ensure-env-vars/0.log"
Oct 14 14:04:43.816803 master-1 kubenswrapper[4740]: I1014 14:04:43.816759 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-1_dbeb1098f6b7e52b91afcf2e9b50b014/etcd-resources-copy/0.log"
Oct 14 14:04:44.391470 master-1 kubenswrapper[4740]: I1014 14:04:44.390806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-10-master-1_cb24e814-5147-4bab-a2ac-0fa7b97b5ecf/installer/0.log"
Oct 14 14:04:44.551778 master-1 kubenswrapper[4740]: I1014 14:04:44.551728 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_revision-pruner-10-master-1_0cf6b504-565c-4311-a44f-c7c9e6f03add/pruner/0.log"
Oct 14 14:04:46.350146 master-1 kubenswrapper[4740]: I1014 14:04:46.350067 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-6b8674d7ff-gspqw_b1a35e1e-333f-480c-b1d6-059475700627/cluster-image-registry-operator/0.log"
Oct 14 14:04:46.436254 master-1 kubenswrapper[4740]: I1014 14:04:46.436178 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-xvwmq_e37236b2-d620-45d8-985a-913c91466842/node-ca/0.log"
Oct 14 14:04:47.238413 master-1 kubenswrapper[4740]: I1014 14:04:47.238351 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/2.log"
Oct 14 14:04:47.260947 master-1 kubenswrapper[4740]: I1014 14:04:47.260910 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/ingress-operator/3.log"
Oct 14 14:04:47.278644 master-1 kubenswrapper[4740]: I1014 14:04:47.278603 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-766ddf4575-xhdjt_398ba6fd-0f8f-46af-b690-61a6eec9176b/kube-rbac-proxy/0.log"
Oct 14 14:04:48.114012 master-1 kubenswrapper[4740]: I1014 14:04:48.113952 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-j76rq_b102298d-f60b-4003-b0b2-55cbada95967/serve-healthcheck-canary/0.log"
Oct 14 14:04:48.542732 master-1 kubenswrapper[4740]: I1014 14:04:48.542620 4740 generic.go:334] "Generic (PLEG): container finished" podID="b87839b6-543d-47e5-8994-7898b8ebec3c" containerID="5b23e0e95a6d5b5cea3721600fd67f1cea3b5c7923a30ce36adb520dad37d53d" exitCode=0
Oct 14 14:04:48.542732 master-1 kubenswrapper[4740]: I1014 14:04:48.542697 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l" event={"ID":"b87839b6-543d-47e5-8994-7898b8ebec3c","Type":"ContainerDied","Data":"5b23e0e95a6d5b5cea3721600fd67f1cea3b5c7923a30ce36adb520dad37d53d"}
Oct 14 14:04:49.312280 master-1 kubenswrapper[4740]: I1014 14:04:49.310744 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-7dcf5bd85b-chrmm_63a7ff79-3d66-457a-bb4a-dc851ca9d4e8/insights-operator/0.log"
Oct 14 14:04:49.313055 master-1 kubenswrapper[4740]: I1014 14:04:49.312760 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-7dcf5bd85b-chrmm_63a7ff79-3d66-457a-bb4a-dc851ca9d4e8/insights-operator/1.log"
Oct 14 14:04:49.657770 master-1 kubenswrapper[4740]: I1014 14:04:49.657714 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:49.704379 master-1 kubenswrapper[4740]: I1014 14:04:49.704310 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zqxwl/master-1-debug-x7c8l"]
Oct 14 14:04:49.747407 master-1 kubenswrapper[4740]: I1014 14:04:49.747342 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx4g5\" (UniqueName: \"kubernetes.io/projected/b87839b6-543d-47e5-8994-7898b8ebec3c-kube-api-access-dx4g5\") pod \"b87839b6-543d-47e5-8994-7898b8ebec3c\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") "
Oct 14 14:04:49.747624 master-1 kubenswrapper[4740]: I1014 14:04:49.747422 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b87839b6-543d-47e5-8994-7898b8ebec3c-host\") pod \"b87839b6-543d-47e5-8994-7898b8ebec3c\" (UID: \"b87839b6-543d-47e5-8994-7898b8ebec3c\") "
Oct 14 14:04:49.747664 master-1 kubenswrapper[4740]: I1014 14:04:49.747623 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87839b6-543d-47e5-8994-7898b8ebec3c-host" (OuterVolumeSpecName: "host") pod "b87839b6-543d-47e5-8994-7898b8ebec3c" (UID: "b87839b6-543d-47e5-8994-7898b8ebec3c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 14 14:04:49.747963 master-1 kubenswrapper[4740]: I1014 14:04:49.747930 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b87839b6-543d-47e5-8994-7898b8ebec3c-host\") on node \"master-1\" DevicePath \"\""
Oct 14 14:04:49.752184 master-1 kubenswrapper[4740]: I1014 14:04:49.752137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87839b6-543d-47e5-8994-7898b8ebec3c-kube-api-access-dx4g5" (OuterVolumeSpecName: "kube-api-access-dx4g5") pod "b87839b6-543d-47e5-8994-7898b8ebec3c" (UID: "b87839b6-543d-47e5-8994-7898b8ebec3c"). InnerVolumeSpecName "kube-api-access-dx4g5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 14:04:49.850063 master-1 kubenswrapper[4740]: I1014 14:04:49.849922 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx4g5\" (UniqueName: \"kubernetes.io/projected/b87839b6-543d-47e5-8994-7898b8ebec3c-kube-api-access-dx4g5\") on node \"master-1\" DevicePath \"\""
Oct 14 14:04:49.974648 master-1 kubenswrapper[4740]: I1014 14:04:49.974596 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zqxwl/master-1-debug-x7c8l"]
Oct 14 14:04:50.559244 master-1 kubenswrapper[4740]: I1014 14:04:50.559186 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0190bfd95c10f661f86930c79e04c46f159c1ca93c4985a6747a6c3b6f6f736"
Oct 14 14:04:50.559711 master-1 kubenswrapper[4740]: I1014 14:04:50.559277 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-x7c8l"
Oct 14 14:04:50.965155 master-1 kubenswrapper[4740]: I1014 14:04:50.964972 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87839b6-543d-47e5-8994-7898b8ebec3c" path="/var/lib/kubelet/pods/b87839b6-543d-47e5-8994-7898b8ebec3c/volumes"
Oct 14 14:04:51.810818 master-1 kubenswrapper[4740]: I1014 14:04:51.810715 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zqxwl/master-1-debug-77wkj"]
Oct 14 14:04:51.811849 master-1 kubenswrapper[4740]: E1014 14:04:51.811189 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87839b6-543d-47e5-8994-7898b8ebec3c" containerName="container-00"
Oct 14 14:04:51.811849 master-1 kubenswrapper[4740]: I1014 14:04:51.811205 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87839b6-543d-47e5-8994-7898b8ebec3c" containerName="container-00"
Oct 14 14:04:51.811849 master-1 kubenswrapper[4740]: I1014 14:04:51.811430 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87839b6-543d-47e5-8994-7898b8ebec3c" containerName="container-00"
Oct 14 14:04:51.812279 master-1 kubenswrapper[4740]: I1014 14:04:51.812177 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:51.896184 master-1 kubenswrapper[4740]: I1014 14:04:51.896114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d228\" (UniqueName: \"kubernetes.io/projected/b9500122-9951-4133-a509-7e83d49cf502-kube-api-access-4d228\") pod \"master-1-debug-77wkj\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:51.896444 master-1 kubenswrapper[4740]: I1014 14:04:51.896242 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9500122-9951-4133-a509-7e83d49cf502-host\") pod \"master-1-debug-77wkj\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:51.998098 master-1 kubenswrapper[4740]: I1014 14:04:51.998033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d228\" (UniqueName: \"kubernetes.io/projected/b9500122-9951-4133-a509-7e83d49cf502-kube-api-access-4d228\") pod \"master-1-debug-77wkj\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:51.998371 master-1 kubenswrapper[4740]: I1014 14:04:51.998148 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9500122-9951-4133-a509-7e83d49cf502-host\") pod \"master-1-debug-77wkj\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:51.998440 master-1 kubenswrapper[4740]: I1014 14:04:51.998368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9500122-9951-4133-a509-7e83d49cf502-host\") pod \"master-1-debug-77wkj\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:52.722837 master-1 kubenswrapper[4740]: I1014 14:04:52.722399 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d228\" (UniqueName: \"kubernetes.io/projected/b9500122-9951-4133-a509-7e83d49cf502-kube-api-access-4d228\") pod \"master-1-debug-77wkj\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:52.726733 master-1 kubenswrapper[4740]: I1014 14:04:52.726682 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-77wkj"
Oct 14 14:04:52.783223 master-1 kubenswrapper[4740]: W1014 14:04:52.783103 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9500122_9951_4133_a509_7e83d49cf502.slice/crio-66586e45568ff19139e8fbf566cf5cc971019b6b6413b02d173106b0d4b441c4 WatchSource:0}: Error finding container 66586e45568ff19139e8fbf566cf5cc971019b6b6413b02d173106b0d4b441c4: Status 404 returned error can't find the container with id 66586e45568ff19139e8fbf566cf5cc971019b6b6413b02d173106b0d4b441c4
Oct 14 14:04:53.274452 master-1 kubenswrapper[4740]: I1014 14:04:53.274335 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-5b5dd85dcc-cxtgh_62ef5e24-de36-454a-a34c-e741a86a6f96/cluster-monitoring-operator/0.log"
Oct 14 14:04:53.483143 master-1 kubenswrapper[4740]: I1014 14:04:53.483063 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-75bcf9f5fd-xkw2l_bf86a34b-7648-4a6e-b4ec-931d2d016dc4/monitoring-plugin/0.log"
Oct 14 14:04:53.592353 master-1 kubenswrapper[4740]: I1014 14:04:53.592214 4740 generic.go:334] "Generic (PLEG): container finished" podID="b9500122-9951-4133-a509-7e83d49cf502" containerID="13c7367557552864a1019c41e568306be32bf8362a1a582b40c48ed87f5a34e0" exitCode=1
Oct 14 14:04:53.592353 master-1 kubenswrapper[4740]: I1014 14:04:53.592293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/master-1-debug-77wkj" event={"ID":"b9500122-9951-4133-a509-7e83d49cf502","Type":"ContainerDied","Data":"13c7367557552864a1019c41e568306be32bf8362a1a582b40c48ed87f5a34e0"}
Oct 14 14:04:53.592353 master-1 kubenswrapper[4740]: I1014 14:04:53.592332 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zqxwl/master-1-debug-77wkj" event={"ID":"b9500122-9951-4133-a509-7e83d49cf502","Type":"ContainerStarted","Data":"66586e45568ff19139e8fbf566cf5cc971019b6b6413b02d173106b0d4b441c4"}
Oct 14 14:04:53.675915 master-1 kubenswrapper[4740]: I1014 14:04:53.675858 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zqxwl/master-1-debug-77wkj"]
Oct 14 14:04:53.684880 master-1 kubenswrapper[4740]: I1014 14:04:53.684830 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zqxwl/master-1-debug-77wkj"]
Oct 14 14:04:53.716249 master-1 kubenswrapper[4740]: I1014 14:04:53.716172 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-p4nr9_218a63b9-61b7-4ca0-b1b1-bf5cf5260960/node-exporter/0.log"
Oct 14 14:04:53.734523 master-1 kubenswrapper[4740]: I1014 14:04:53.734476 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-p4nr9_218a63b9-61b7-4ca0-b1b1-bf5cf5260960/kube-rbac-proxy/0.log"
Oct 14 14:04:53.753063 master-1 kubenswrapper[4740]: I1014 14:04:53.752992 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-p4nr9_218a63b9-61b7-4ca0-b1b1-bf5cf5260960/init-textfile/0.log"
Oct 14 14:04:54.701369 master-1 kubenswrapper[4740]: I1014 14:04:54.701290 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-77wkj" Oct 14 14:04:54.755097 master-1 kubenswrapper[4740]: I1014 14:04:54.755001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9500122-9951-4133-a509-7e83d49cf502-host\") pod \"b9500122-9951-4133-a509-7e83d49cf502\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " Oct 14 14:04:54.755424 master-1 kubenswrapper[4740]: I1014 14:04:54.755160 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9500122-9951-4133-a509-7e83d49cf502-host" (OuterVolumeSpecName: "host") pod "b9500122-9951-4133-a509-7e83d49cf502" (UID: "b9500122-9951-4133-a509-7e83d49cf502"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:04:54.755424 master-1 kubenswrapper[4740]: I1014 14:04:54.755241 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d228\" (UniqueName: \"kubernetes.io/projected/b9500122-9951-4133-a509-7e83d49cf502-kube-api-access-4d228\") pod \"b9500122-9951-4133-a509-7e83d49cf502\" (UID: \"b9500122-9951-4133-a509-7e83d49cf502\") " Oct 14 14:04:54.756200 master-1 kubenswrapper[4740]: I1014 14:04:54.756138 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9500122-9951-4133-a509-7e83d49cf502-host\") on node \"master-1\" DevicePath \"\"" Oct 14 14:04:54.758428 master-1 kubenswrapper[4740]: I1014 14:04:54.758377 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9500122-9951-4133-a509-7e83d49cf502-kube-api-access-4d228" (OuterVolumeSpecName: "kube-api-access-4d228") pod "b9500122-9951-4133-a509-7e83d49cf502" (UID: "b9500122-9951-4133-a509-7e83d49cf502"). InnerVolumeSpecName "kube-api-access-4d228". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:04:54.858369 master-1 kubenswrapper[4740]: I1014 14:04:54.858002 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d228\" (UniqueName: \"kubernetes.io/projected/b9500122-9951-4133-a509-7e83d49cf502-kube-api-access-4d228\") on node \"master-1\" DevicePath \"\"" Oct 14 14:04:54.956396 master-1 kubenswrapper[4740]: I1014 14:04:54.956339 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9500122-9951-4133-a509-7e83d49cf502" path="/var/lib/kubelet/pods/b9500122-9951-4133-a509-7e83d49cf502/volumes" Oct 14 14:04:55.612420 master-1 kubenswrapper[4740]: I1014 14:04:55.612350 4740 scope.go:117] "RemoveContainer" containerID="13c7367557552864a1019c41e568306be32bf8362a1a582b40c48ed87f5a34e0" Oct 14 14:04:55.612736 master-1 kubenswrapper[4740]: I1014 14:04:55.612630 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zqxwl/master-1-debug-77wkj" Oct 14 14:04:55.915345 master-1 kubenswrapper[4740]: I1014 14:04:55.915162 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-79d5f95f5c-bg9c4_405aee2c-2eac-40f5-aa9e-e9ca6cf5ccd5/prometheus-operator-admission-webhook/0.log" Oct 14 14:04:56.058813 master-1 kubenswrapper[4740]: I1014 14:04:56.058725 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc99494f6-ds5gd_fa8361b8-f9e0-44d8-9ef1-766c6b0df517/thanos-query/0.log" Oct 14 14:04:56.123905 master-1 kubenswrapper[4740]: I1014 14:04:56.123847 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc99494f6-ds5gd_fa8361b8-f9e0-44d8-9ef1-766c6b0df517/kube-rbac-proxy-web/0.log" Oct 14 14:04:56.142504 master-1 kubenswrapper[4740]: I1014 14:04:56.142439 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_thanos-querier-cc99494f6-ds5gd_fa8361b8-f9e0-44d8-9ef1-766c6b0df517/kube-rbac-proxy/0.log" Oct 14 14:04:56.171117 master-1 kubenswrapper[4740]: I1014 14:04:56.170704 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc99494f6-ds5gd_fa8361b8-f9e0-44d8-9ef1-766c6b0df517/prom-label-proxy/0.log" Oct 14 14:04:56.197071 master-1 kubenswrapper[4740]: I1014 14:04:56.197004 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc99494f6-ds5gd_fa8361b8-f9e0-44d8-9ef1-766c6b0df517/kube-rbac-proxy-rules/0.log" Oct 14 14:04:56.220681 master-1 kubenswrapper[4740]: I1014 14:04:56.220620 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc99494f6-ds5gd_fa8361b8-f9e0-44d8-9ef1-766c6b0df517/kube-rbac-proxy-metrics/0.log" Oct 14 14:04:59.866814 master-1 kubenswrapper[4740]: I1014 14:04:59.866661 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/controller/0.log" Oct 14 14:05:00.932509 master-1 kubenswrapper[4740]: I1014 14:05:00.932439 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/frr/0.log" Oct 14 14:05:00.958762 master-1 kubenswrapper[4740]: I1014 14:05:00.958694 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/reloader/0.log" Oct 14 14:05:00.984756 master-1 kubenswrapper[4740]: I1014 14:05:00.984294 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/frr-metrics/0.log" Oct 14 14:05:01.172965 master-1 kubenswrapper[4740]: I1014 14:05:01.172917 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/kube-rbac-proxy/0.log" Oct 14 14:05:01.289719 master-1 kubenswrapper[4740]: I1014 14:05:01.289662 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/kube-rbac-proxy-frr/0.log" Oct 14 14:05:01.347126 master-1 kubenswrapper[4740]: I1014 14:05:01.347075 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/cp-frr-files/0.log" Oct 14 14:05:01.423584 master-1 kubenswrapper[4740]: I1014 14:05:01.423490 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/cp-reloader/0.log" Oct 14 14:05:01.455966 master-1 kubenswrapper[4740]: I1014 14:05:01.455906 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nnbg4_eff61622-703c-47c7-a70a-a076562ca3a3/cp-metrics/0.log" Oct 14 14:05:03.241450 master-1 kubenswrapper[4740]: I1014 14:05:03.241378 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mkjj_59cd9872-e0ab-4acd-b8c8-1fa1fd61e318/speaker/0.log" Oct 14 14:05:03.264985 master-1 kubenswrapper[4740]: I1014 14:05:03.264927 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7mkjj_59cd9872-e0ab-4acd-b8c8-1fa1fd61e318/kube-rbac-proxy/0.log"